I1228 21:09:04.070745 8 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1228 21:09:04.071181 8 e2e.go:109] Starting e2e run "c225796b-88cb-41bc-9256-0a10e9f1f399" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577567342 - Will randomize all specs
Will run 278 of 4814 specs
Dec 28 21:09:04.144: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 21:09:04.149: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 28 21:09:04.170: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 28 21:09:04.201: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 28 21:09:04.201: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 28 21:09:04.201: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 28 21:09:04.294: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 28 21:09:04.294: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 28 21:09:04.294: INFO: e2e test version: v1.17.0
Dec 28 21:09:04.297: INFO: kube-apiserver version: v1.16.1
Dec 28 21:09:04.297: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 21:09:04.309: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:09:04.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Dec 28 21:09:04.417: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 28 21:09:04.435: INFO: Waiting up to 5m0s for pod "pod-c340630f-5b6b-47e9-8538-2bef79393827" in namespace "emptydir-5966" to be "success or failure"
Dec 28 21:09:04.453: INFO: Pod "pod-c340630f-5b6b-47e9-8538-2bef79393827": Phase="Pending", Reason="", readiness=false. Elapsed: 17.482189ms
Dec 28 21:09:06.472: INFO: Pod "pod-c340630f-5b6b-47e9-8538-2bef79393827": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036284833s
Dec 28 21:09:08.486: INFO: Pod "pod-c340630f-5b6b-47e9-8538-2bef79393827": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050986839s
Dec 28 21:09:10.549: INFO: Pod "pod-c340630f-5b6b-47e9-8538-2bef79393827": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113349886s
STEP: Saw pod success
Dec 28 21:09:10.549: INFO: Pod "pod-c340630f-5b6b-47e9-8538-2bef79393827" satisfied condition "success or failure"
Dec 28 21:09:10.563: INFO: Trying to get logs from node jerma-node pod pod-c340630f-5b6b-47e9-8538-2bef79393827 container test-container:
STEP: delete the pod
Dec 28 21:09:10.807: INFO: Waiting for pod pod-c340630f-5b6b-47e9-8538-2bef79393827 to disappear
Dec 28 21:09:10.834: INFO: Pod pod-c340630f-5b6b-47e9-8538-2bef79393827 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:09:10.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5966" for this suite.
• [SLOW TEST:6.588 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":9,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook
  when create a pod with lifecycle hook
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:09:10.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 28 21:09:27.197: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 21:09:27.210: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 21:09:29.210: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 21:09:29.219: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 21:09:31.210: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 21:09:31.220: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 21:09:33.210: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 21:09:33.219: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:09:33.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-151" for this suite.
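
For reference, "pod-with-prestop-exec-hook" above is a pod whose container declares a preStop exec hook that calls back to the handler pod created in the BeforeEach step. A minimal sketch of that shape, with the image, command, and handler address as illustrative assumptions rather than the suite's exact values:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-prestop-exec-hook   # name taken from the log above
    spec:
      containers:
      - name: main
        image: busybox:1.29              # assumed image; the suite pins its own test images
        command: ["sleep", "600"]
        lifecycle:
          preStop:
            exec:
              # runs inside the container when deletion starts; the later
              # "check prestop hook" step verifies the handler saw this request
              command: ["sh", "-c", "curl http://HANDLER_POD_IP:8080/echo?msg=prestop"]

The repeated "still exists" lines in the log are expected: the pod only disappears once the hook has had time to run during the termination grace period.
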
• [SLOW TEST:22.362 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":30,"failed":0}
SSS
------------------------------
[k8s.io] Security Context
  When creating a pod with privileged
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:09:33.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 21:09:33.380: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-be8cc4a7-f334-49cd-bc07-2a18e4ead879" in namespace "security-context-test-445" to be "success or failure"
Dec 28 21:09:33.389: INFO: Pod "busybox-privileged-false-be8cc4a7-f334-49cd-bc07-2a18e4ead879": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265636ms
Dec 28 21:09:35.427: INFO: Pod "busybox-privileged-false-be8cc4a7-f334-49cd-bc07-2a18e4ead879": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046826811s
Dec 28 21:09:37.441: INFO: Pod "busybox-privileged-false-be8cc4a7-f334-49cd-bc07-2a18e4ead879": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060727999s
Dec 28 21:09:39.450: INFO: Pod "busybox-privileged-false-be8cc4a7-f334-49cd-bc07-2a18e4ead879": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070009478s
Dec 28 21:09:41.460: INFO: Pod "busybox-privileged-false-be8cc4a7-f334-49cd-bc07-2a18e4ead879": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080005939s
Dec 28 21:09:41.460: INFO: Pod "busybox-privileged-false-be8cc4a7-f334-49cd-bc07-2a18e4ead879" satisfied condition "success or failure"
Dec 28 21:09:41.474: INFO: Got logs for pod "busybox-privileged-false-be8cc4a7-f334-49cd-bc07-2a18e4ead879": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:09:41.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-445" for this suite.
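
The "RTNETLINK answers: Operation not permitted" line is the assertion target here: with privileged: false the container may not reconfigure network interfaces. A rough manifest equivalent (image tag and command are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-privileged-false     # modeled on the generated name in the log
    spec:
      restartPolicy: Never
      containers:
      - name: busybox-privileged-false
        image: busybox:1.29              # assumed tag
        # an unprivileged container should get EPERM from the kernel here
        command: ["ip", "link", "add", "dummy0", "type", "dummy"]
        securityContext:
          privileged: false
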
• [SLOW TEST:8.227 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":33,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:09:41.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 28 21:09:41.592: INFO: Waiting up to 5m0s for pod "pod-e21db61a-9add-4bfc-8a5f-56bab6df6a7e" in namespace "emptydir-5129" to be "success or failure"
Dec 28 21:09:41.600: INFO: Pod "pod-e21db61a-9add-4bfc-8a5f-56bab6df6a7e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127777ms
Dec 28 21:09:43.613: INFO: Pod "pod-e21db61a-9add-4bfc-8a5f-56bab6df6a7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021370072s
Dec 28 21:09:45.659: INFO: Pod "pod-e21db61a-9add-4bfc-8a5f-56bab6df6a7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067139321s
Dec 28 21:09:47.668: INFO: Pod "pod-e21db61a-9add-4bfc-8a5f-56bab6df6a7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076334315s
Dec 28 21:09:49.679: INFO: Pod "pod-e21db61a-9add-4bfc-8a5f-56bab6df6a7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.086575714s
STEP: Saw pod success
Dec 28 21:09:49.679: INFO: Pod "pod-e21db61a-9add-4bfc-8a5f-56bab6df6a7e" satisfied condition "success or failure"
Dec 28 21:09:49.682: INFO: Trying to get logs from node jerma-node pod pod-e21db61a-9add-4bfc-8a5f-56bab6df6a7e container test-container:
STEP: delete the pod
Dec 28 21:09:49.724: INFO: Waiting for pod pod-e21db61a-9add-4bfc-8a5f-56bab6df6a7e to disappear
Dec 28 21:09:49.730: INFO: Pod pod-e21db61a-9add-4bfc-8a5f-56bab6df6a7e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:09:49.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5129" for this suite.
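
"emptydir 0644 on tmpfs" means the volume is a memory-backed emptyDir and the test file is written with mode 0644. Sketched as a manifest (name, image, and the exact check are assumptions; the suite uses its own mounttest image):

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-tmpfs          # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29              # assumed image
        command: ["sh", "-c", "echo hi > /test-volume/f; chmod 0644 /test-volume/f; ls -l /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory                 # "tmpfs": RAM-backed rather than node disk
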
• [SLOW TEST:8.255 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":42,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:09:49.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1228 21:10:05.442738 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 21:10:05.443: INFO: For apiserver_request_total:
	For apiserver_request_latency_seconds:
	For apiserver_init_events_total:
	For garbage_collector_attempt_to_delete_queue_latency:
	For garbage_collector_attempt_to_delete_work_duration:
	For garbage_collector_attempt_to_orphan_queue_latency:
	For garbage_collector_attempt_to_orphan_work_duration:
	For garbage_collector_dirty_processing_latency_microseconds:
	For garbage_collector_event_processing_latency_microseconds:
	For garbage_collector_graph_changes_queue_latency:
	For garbage_collector_graph_changes_work_duration:
	For garbage_collector_orphan_processing_latency_microseconds:
	For namespace_queue_latency:
	For namespace_queue_latency_sum:
	For namespace_queue_latency_count:
	For namespace_retries:
	For namespace_work_duration:
	For namespace_work_duration_sum:
	For namespace_work_duration_count:
	For function_duration_seconds:
	For errors_total:
	For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:10:05.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4355" for this suite.
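
The "owner as well" step works through ownerReferences in pod metadata: a dependent is only collected once every blocking owner is gone, so pods that also name simpletest-rc-to-stay must survive the deletion. The metadata shape involved, sketched with placeholder UIDs:

    apiVersion: v1
    kind: Pod
    metadata:
      name: simpletest-pod               # hypothetical pod name
      ownerReferences:
      - apiVersion: v1
        kind: ReplicationController
        name: simpletest-rc-to-be-deleted          # the owner the test deletes
        uid: 00000000-0000-0000-0000-000000000001  # placeholder
      - apiVersion: v1
        kind: ReplicationController
        name: simpletest-rc-to-stay                # still-valid owner; keeps the pod alive
        uid: 00000000-0000-0000-0000-000000000002  # placeholder
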
• [SLOW TEST:15.777 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":5,"skipped":46,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:10:05.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4319
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 28 21:10:07.174: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 28 21:10:52.201: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-4319 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 21:10:52.201: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 21:10:52.569: INFO: Waiting for responses: map[]
Dec 28 21:10:52.586: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.5&port=8081&tries=1'] Namespace:pod-network-test-4319 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 21:10:52.586: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 21:10:52.838: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:10:52.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4319" for this suite.
• [SLOW TEST:47.344 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":53,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:10:52.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9676.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9676.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 21:11:07.155: INFO: File wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-e22353a6-3029-41b1-aea3-fdd67d538fa1 contains '' instead of 'foo.example.com.'
Dec 28 21:11:07.162: INFO: File jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-e22353a6-3029-41b1-aea3-fdd67d538fa1 contains '' instead of 'foo.example.com.'
Dec 28 21:11:07.162: INFO: Lookups using dns-9676/dns-test-e22353a6-3029-41b1-aea3-fdd67d538fa1 failed for: [wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local]
Dec 28 21:11:12.211: INFO: DNS probes using dns-test-e22353a6-3029-41b1-aea3-fdd67d538fa1 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9676.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9676.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 21:11:24.556: INFO: File wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc contains '' instead of 'bar.example.com.'
Dec 28 21:11:24.563: INFO: File jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc contains '' instead of 'bar.example.com.'
Dec 28 21:11:24.563: INFO: Lookups using dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc failed for: [wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local]
Dec 28 21:11:29.590: INFO: File wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 28 21:11:29.602: INFO: File jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 28 21:11:29.602: INFO: Lookups using dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc failed for: [wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local]
Dec 28 21:11:34.590: INFO: File wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 28 21:11:34.604: INFO: File jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 28 21:11:34.604: INFO: Lookups using dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc failed for: [wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local]
Dec 28 21:11:39.578: INFO: File wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 28 21:11:39.586: INFO: File jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 28 21:11:39.586: INFO: Lookups using dns-9676/dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc failed for: [wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local]
Dec 28 21:11:44.608: INFO: DNS probes using dns-test-451f212b-9745-4689-ae3e-42acf7ae21dc succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9676.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9676.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 21:11:57.214: INFO: File wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-51596a84-c0a0-4190-8e8d-d7f868a7f375 contains '' instead of '10.105.4.53'
Dec 28 21:11:57.220: INFO: File jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local from pod dns-9676/dns-test-51596a84-c0a0-4190-8e8d-d7f868a7f375 contains '' instead of '10.105.4.53'
Dec 28 21:11:57.220: INFO: Lookups using dns-9676/dns-test-51596a84-c0a0-4190-8e8d-d7f868a7f375 failed for: [wheezy_udp@dns-test-service-3.dns-9676.svc.cluster.local jessie_udp@dns-test-service-3.dns-9676.svc.cluster.local]
Dec 28 21:12:02.230: INFO: DNS probes using dns-test-51596a84-c0a0-4190-8e8d-d7f868a7f375 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:12:02.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9676" for this suite.
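
The three probe rounds above track one Service being mutated: first a CNAME to foo.example.com, then to bar.example.com, then conversion to a ClusterIP (the 10.105.4.53 A record). Its initial form is the standard ExternalName shape:

    apiVersion: v1
    kind: Service
    metadata:
      name: dns-test-service-3
      namespace: dns-9676
    spec:
      type: ExternalName
      externalName: foo.example.com   # changed to bar.example.com mid-test,
                                      # then the Service is switched to type ClusterIP
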
• [SLOW TEST:69.542 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":7,"skipped":59,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:12:02.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Dec 28 21:12:02.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:12:21.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8884" for this suite.
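
"Mark a version not served" flips served: false on one entry of the CRD's spec.versions, after which the apiserver drops that version's definitions from the published OpenAPI document. A minimal sketch (group and kind are hypothetical, not the suite's generated names):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: testcrds.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: testcrds
        kind: TestCrd
      versions:
      - name: v1
        served: true                   # stays in the published spec
        storage: true
        schema: {openAPIV3Schema: {type: object}}
      - name: v2
        served: false                  # unserved: removed from the published spec
        storage: false
        schema: {openAPIV3Schema: {type: object}}
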
• [SLOW TEST:19.170 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":8,"skipped":140,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:12:21.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:12:28.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3403" for this suite.
STEP: Destroying namespace "nsdeletetest-1602" for this suite.
Dec 28 21:12:28.042: INFO: Namespace nsdeletetest-1602 was already deleted
STEP: Destroying namespace "nsdeletetest-6637" for this suite.
• [SLOW TEST:6.472 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":9,"skipped":193,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:12:28.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 28 21:12:28.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6240'
Dec 28 21:12:30.140: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 21:12:30.140: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773
Dec 28 21:12:30.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-6240'
Dec 28 21:12:30.453: INFO: stderr: ""
Dec 28 21:12:30.453: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:12:30.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6240" for this suite.
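
The deprecation warning in stderr is the point of interest: --generator=job/v1 was already scheduled for removal, and the suggested kubectl create route amounts to applying a batch/v1 Job like the following (name and image from the log; the rest is the standard shape such a generator produced):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: e2e-test-httpd-job
    spec:
      template:
        spec:
          restartPolicy: OnFailure     # what --restart=OnFailure selects
          containers:
          - name: e2e-test-httpd-job
            image: docker.io/library/httpd:2.4.38-alpine
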
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":10,"skipped":197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:12:30.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-61155297-ecad-4b19-984e-935e9af4cd62 STEP: Creating a pod to test consume secrets Dec 28 21:12:30.911: INFO: Waiting up to 5m0s for pod "pod-secrets-de1a7376-8834-4d04-b58b-7469bb585937" in namespace "secrets-492" to be "success or failure" Dec 28 21:12:30.930: INFO: Pod "pod-secrets-de1a7376-8834-4d04-b58b-7469bb585937": Phase="Pending", Reason="", readiness=false. Elapsed: 18.92659ms Dec 28 21:12:32.944: INFO: Pod "pod-secrets-de1a7376-8834-4d04-b58b-7469bb585937": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032608042s Dec 28 21:12:34.956: INFO: Pod "pod-secrets-de1a7376-8834-4d04-b58b-7469bb585937": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044499166s Dec 28 21:12:36.977: INFO: Pod "pod-secrets-de1a7376-8834-4d04-b58b-7469bb585937": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06534102s Dec 28 21:12:38.989: INFO: Pod "pod-secrets-de1a7376-8834-4d04-b58b-7469bb585937": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077151807s Dec 28 21:12:41.003: INFO: Pod "pod-secrets-de1a7376-8834-4d04-b58b-7469bb585937": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091096861s STEP: Saw pod success Dec 28 21:12:41.003: INFO: Pod "pod-secrets-de1a7376-8834-4d04-b58b-7469bb585937" satisfied condition "success or failure" Dec 28 21:12:41.009: INFO: Trying to get logs from node jerma-node pod pod-secrets-de1a7376-8834-4d04-b58b-7469bb585937 container secret-volume-test: STEP: delete the pod Dec 28 21:12:41.093: INFO: Waiting for pod pod-secrets-de1a7376-8834-4d04-b58b-7469bb585937 to disappear Dec 28 21:12:41.172: INFO: Pod pod-secrets-de1a7376-8834-4d04-b58b-7469bb585937 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:12:41.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-492" for this suite. STEP: Destroying namespace "secret-namespace-7310" for this suite. 
• [SLOW TEST:10.712 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":252,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:12:41.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-6938/configmap-test-bbfbdf89-1510-40b2-81c9-c8345975b479
STEP: Creating a pod to test consume configMaps
Dec 28 21:12:41.385: INFO: Waiting up to 5m0s for pod "pod-configmaps-58bbe557-c0e8-41a2-b7f7-895fa6faa8eb" in namespace "configmap-6938" to be "success or failure"
Dec 28 21:12:41.460: INFO: Pod "pod-configmaps-58bbe557-c0e8-41a2-b7f7-895fa6faa8eb": Phase="Pending", Reason="", readiness=false. Elapsed: 74.533563ms
Dec 28 21:12:43.482: INFO: Pod "pod-configmaps-58bbe557-c0e8-41a2-b7f7-895fa6faa8eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096102841s
Dec 28 21:12:45.517: INFO: Pod "pod-configmaps-58bbe557-c0e8-41a2-b7f7-895fa6faa8eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131205837s
Dec 28 21:12:47.528: INFO: Pod "pod-configmaps-58bbe557-c0e8-41a2-b7f7-895fa6faa8eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142387754s
Dec 28 21:12:49.544: INFO: Pod "pod-configmaps-58bbe557-c0e8-41a2-b7f7-895fa6faa8eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158820289s
STEP: Saw pod success
Dec 28 21:12:49.545: INFO: Pod "pod-configmaps-58bbe557-c0e8-41a2-b7f7-895fa6faa8eb" satisfied condition "success or failure"
Dec 28 21:12:49.551: INFO: Trying to get logs from node jerma-node pod pod-configmaps-58bbe557-c0e8-41a2-b7f7-895fa6faa8eb container env-test:
STEP: delete the pod
Dec 28 21:12:49.603: INFO: Waiting for pod pod-configmaps-58bbe557-c0e8-41a2-b7f7-895fa6faa8eb to disappear
Dec 28 21:12:49.718: INFO: Pod pod-configmaps-58bbe557-c0e8-41a2-b7f7-895fa6faa8eb no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:12:49.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6938" for this suite.
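
"Consumable via the environment" means the value arrives through env valueFrom configMapKeyRef rather than a mounted volume; the env-test container simply prints its environment. A sketch with hypothetical key and names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-env         # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox:1.29            # assumed image
        command: ["env"]
        env:
        - name: CONFIG_DATA_1
          valueFrom:
            configMapKeyRef:
              name: configmap-test     # hypothetical ConfigMap name
              key: data-1              # hypothetical key
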
• [SLOW TEST:8.539 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":284,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:12:49.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Dec 28 21:12:49.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:13:08.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1553" for this suite.
• [SLOW TEST:18.605 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":13,"skipped":300,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:13:08.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Dec 28 21:13:17.132: INFO: Successfully updated pod "labelsupdate14686242-4572-43d9-af57-cd6bf6cdc004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:13:19.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3970" for this suite.
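
Label updates can propagate here because a downwardAPI volume is re-rendered by the kubelet when pod metadata changes, unlike environment variables, which are fixed at container start. The wiring this test relies on, sketched (names, image, and file path are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: labelsupdate-example       # the suite generates its names
      labels:
        key: value1                    # updating this rewrites the projected file
    spec:
      containers:
      - name: client-container
        image: busybox:1.29            # assumed image
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels
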
• [SLOW TEST:10.863 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":302,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 21:13:19.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Dec 28 21:13:19.968: INFO: created pod pod-service-account-defaultsa
Dec 28 21:13:19.969: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 28 21:13:20.003: INFO: created pod pod-service-account-mountsa
Dec 28 21:13:20.004: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 28 21:13:20.096: INFO: created pod pod-service-account-nomountsa
Dec 28 21:13:20.097: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 28 21:13:20.123: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 28 21:13:20.124: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 28 21:13:20.186: INFO: created pod pod-service-account-mountsa-mountspec
Dec 28 21:13:20.186: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 28 21:13:20.267: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 28 21:13:20.267: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 28 21:13:20.320: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 28 21:13:20.321: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 28 21:13:20.405: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 28 21:13:20.405: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 28 21:13:20.449: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 28 21:13:20.449: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 21:13:20.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9737" for this suite.
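
The true/false matrix above comes from crossing the ServiceAccount-level and pod-level automountServiceAccountToken settings; when both are set, the pod spec takes precedence. The two knobs, sketched (ServiceAccount name and image are assumptions):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nomount-sa                     # hypothetical name
    automountServiceAccountToken: false    # account-level default
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-service-account-nomountsa-mountspec   # name from the log
    spec:
      serviceAccountName: nomount-sa
      automountServiceAccountToken: true   # pod-level setting overrides the SA default
      containers:
      - name: main
        image: busybox:1.29                # assumed image
        command: ["sleep", "3600"]
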
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":15,"skipped":358,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:13:22.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:13:24.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9310" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":16,"skipped":361,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:13:24.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Dec 28 21:13:26.706: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3234 /api/v1/namespaces/watch-3234/configmaps/e2e-watch-test-watch-closed 9d5f5ed4-2785-435c-8d78-15c1471bbe5d 10423100 0 2019-12-28 21:13:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 28 21:13:26.707: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3234 /api/v1/namespaces/watch-3234/configmaps/e2e-watch-test-watch-closed 9d5f5ed4-2785-435c-8d78-15c1471bbe5d 10423102 0 2019-12-28 21:13:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} 
STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Dec 28 21:13:27.538: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3234 /api/v1/namespaces/watch-3234/configmaps/e2e-watch-test-watch-closed 9d5f5ed4-2785-435c-8d78-15c1471bbe5d 10423105 0 2019-12-28 21:13:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 28 21:13:27.538: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3234 /api/v1/namespaces/watch-3234/configmaps/e2e-watch-test-watch-closed 9d5f5ed4-2785-435c-8d78-15c1471bbe5d 10423109 0 2019-12-28 21:13:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:13:27.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3234" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":17,"skipped":370,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:13:27.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-1effe0eb-7be5-4036-bc79-441e52616cd1 STEP: Creating a pod to test consume configMaps Dec 28 21:13:28.886: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719" in namespace "projected-6340" to be "success or failure" Dec 28 21:13:29.057: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Pending", Reason="", readiness=false. Elapsed: 171.04131ms Dec 28 21:13:31.676: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Pending", Reason="", readiness=false. Elapsed: 2.789943465s Dec 28 21:13:35.786: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Pending", Reason="", readiness=false. Elapsed: 6.899681837s Dec 28 21:13:38.503: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Pending", Reason="", readiness=false. Elapsed: 9.617232981s Dec 28 21:13:40.776: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.890358222s Dec 28 21:13:42.887: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Pending", Reason="", readiness=false. Elapsed: 14.001016824s Dec 28 21:13:44.892: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Pending", Reason="", readiness=false. Elapsed: 16.006172795s Dec 28 21:13:47.300: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Pending", Reason="", readiness=false. Elapsed: 18.414468929s Dec 28 21:13:49.537: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Pending", Reason="", readiness=false. Elapsed: 20.650852012s Dec 28 21:13:52.702: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Pending", Reason="", readiness=false. Elapsed: 23.815720802s Dec 28 21:13:54.757: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Pending", Reason="", readiness=false. Elapsed: 25.870783917s Dec 28 21:13:56.767: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.880880015s STEP: Saw pod success Dec 28 21:13:56.767: INFO: Pod "pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719" satisfied condition "success or failure" Dec 28 21:13:56.771: INFO: Trying to get logs from node jerma-server-4b75xjbddvit pod pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719 container projected-configmap-volume-test: STEP: delete the pod Dec 28 21:13:56.973: INFO: Waiting for pod pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719 to disappear Dec 28 21:13:56.983: INFO: Pod pod-projected-configmaps-1108d21b-efc2-40b6-b1ac-aed4abccb719 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:13:56.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6340" for this suite. • [SLOW TEST:29.360 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":382,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:13:57.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:14:14.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5102" for this suite. • [SLOW TEST:17.794 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":19,"skipped":383,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:14:14.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:14:14.986: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Dec 28 21:14:17.359: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:14:18.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2504" for this suite. 
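The quota-exceeded failure condition above is driven by just two objects: a ResourceQuota capping the namespace at two pods, and a ReplicationController that asks for more. A minimal sketch of that pair, assuming a replica count of 3 and borrowing the httpd image used elsewhere in this run (the log only says the RC asks for more than the quota allows):

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: condition-test
  spec:
    hard:
      pods: "2"
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: condition-test
  spec:
    replicas: 3                # assumption: any value above the 2-pod quota triggers the condition
    selector:
      name: condition-test
    template:
      metadata:
        labels:
          name: condition-test
      spec:
        containers:
        - name: httpd
          image: docker.io/library/httpd:2.4.38-alpine   # assumption: image taken from the kubectl test later in this run

Once the quota blocks pod creation, the RC controller records a ReplicaFailure condition in status.conditions; scaling replicas back down within the quota clears it, which is exactly the sequence logged above.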
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":20,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:14:18.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-ce52c402-8f8c-429b-b536-294bd4cbf996 Dec 28 21:14:19.104: INFO: Pod name my-hostname-basic-ce52c402-8f8c-429b-b536-294bd4cbf996: Found 0 pods out of 1 Dec 28 21:14:24.125: INFO: Pod name my-hostname-basic-ce52c402-8f8c-429b-b536-294bd4cbf996: Found 1 pods out of 1 Dec 28 21:14:24.126: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ce52c402-8f8c-429b-b536-294bd4cbf996" are running Dec 28 21:14:32.154: INFO: Pod "my-hostname-basic-ce52c402-8f8c-429b-b536-294bd4cbf996-4s74x" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 21:14:19 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 21:14:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ce52c402-8f8c-429b-b536-294bd4cbf996]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 21:14:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ce52c402-8f8c-429b-b536-294bd4cbf996]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 21:14:19 +0000 UTC Reason: Message:}]) Dec 28 21:14:32.154: INFO: Trying to dial the pod Dec 28 21:14:37.175: INFO: Controller my-hostname-basic-ce52c402-8f8c-429b-b536-294bd4cbf996: Got expected result from replica 1 [my-hostname-basic-ce52c402-8f8c-429b-b536-294bd4cbf996-4s74x]: "my-hostname-basic-ce52c402-8f8c-429b-b536-294bd4cbf996-4s74x", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:14:37.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6580" for this suite. 
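For the serve-a-basic-image test just finished, the controller needs a single replica whose container answers with its own hostname, so that dialing the replica returns the pod name. A sketch under that assumption; the log does not name the image, so the serve-hostname container here is illustrative:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: my-hostname-basic-example      # the run used a generated UUID suffix instead
  spec:
    replicas: 1
    selector:
      name: my-hostname-basic-example
    template:
      metadata:
        labels:
          name: my-hostname-basic-example
      spec:
        containers:
        - name: my-hostname-basic-example
          image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumption: any server that replies with its hostname
          args: ["serve-hostname"]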
• [SLOW TEST:18.385 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":21,"skipped":409,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:14:37.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Dec 28 21:14:43.784: INFO: 0 pods remaining Dec 28 21:14:43.784: INFO: 0 pods has nil DeletionTimestamp Dec 28 21:14:43.784: INFO: STEP: Gathering metrics W1228 21:14:44.651934 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 28 21:14:44.652: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:14:44.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3363" for this suite. 
• [SLOW TEST:7.490 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":22,"skipped":411,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:14:44.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Dec 28 21:14:45.102: INFO: Waiting up to 5m0s for pod "client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6" in namespace "containers-199" to be "success or failure" Dec 28 21:14:45.242: INFO: Pod "client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6": Phase="Pending", Reason="", readiness=false. Elapsed: 140.460519ms Dec 28 21:14:47.760: INFO: Pod "client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.658319497s Dec 28 21:14:49.772: INFO: Pod "client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.670559303s Dec 28 21:14:51.786: INFO: Pod "client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.68395734s Dec 28 21:14:53.797: INFO: Pod "client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.695549682s Dec 28 21:14:55.805: INFO: Pod "client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.702970687s Dec 28 21:14:57.816: INFO: Pod "client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.713839366s Dec 28 21:14:59.827: INFO: Pod "client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.725005816s Dec 28 21:15:01.841: INFO: Pod "client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.739351103s STEP: Saw pod success Dec 28 21:15:01.841: INFO: Pod "client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6" satisfied condition "success or failure" Dec 28 21:15:01.848: INFO: Trying to get logs from node jerma-node pod client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6 container test-container: STEP: delete the pod Dec 28 21:15:01.931: INFO: Waiting for pod client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6 to disappear Dec 28 21:15:01.935: INFO: Pod client-containers-f29ca444-ce4c-4bb0-9ec0-6b63f17c17a6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:15:01.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-199" for this suite. • [SLOW TEST:17.278 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":417,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:15:01.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Dec 28 21:15:10.144: INFO: Pod pod-hostip-61a58b2e-e633-42aa-be9b-a68b6969a4be has hostIP: 10.96.2.170 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:15:10.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-871" for this suite. 
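The two tests above each pin down one pod-spec field. Overriding the image's default arguments (the docker CMD) means setting args on the container while leaving command, the ENTRYPOINT, untouched; and status.hostIP is populated by the system once the pod is bound to a node, which is the value the Pods test reads back. A sketch of the args override; image and argument values are assumptions for illustration:

  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-example
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29    # assumption: any image with a default CMD
      args: ["echo", "overridden arguments"]   # replaces the image's CMD; the ENTRYPOINT is left alone

For the host IP check, nothing more is needed than reading .status.hostIP from the running pod, for example with: kubectl get pod <name> -o jsonpath='{.status.hostIP}'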
• [SLOW TEST:8.190 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":424,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:15:10.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-48dd2c85-d6cf-4eac-931f-7dc157b3c89b STEP: Creating a pod to test consume secrets Dec 28 21:15:10.417: INFO: Waiting up to 5m0s for pod "pod-secrets-8f421332-9311-4cb8-ab97-ebed21fc152b" in namespace "secrets-6629" to be "success or failure" Dec 28 21:15:10.431: INFO: Pod "pod-secrets-8f421332-9311-4cb8-ab97-ebed21fc152b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.36288ms Dec 28 21:15:12.449: INFO: Pod "pod-secrets-8f421332-9311-4cb8-ab97-ebed21fc152b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031809038s Dec 28 21:15:14.459: INFO: Pod "pod-secrets-8f421332-9311-4cb8-ab97-ebed21fc152b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042501925s Dec 28 21:15:16.522: INFO: Pod "pod-secrets-8f421332-9311-4cb8-ab97-ebed21fc152b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105331654s Dec 28 21:15:18.538: INFO: Pod "pod-secrets-8f421332-9311-4cb8-ab97-ebed21fc152b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121268254s STEP: Saw pod success Dec 28 21:15:18.539: INFO: Pod "pod-secrets-8f421332-9311-4cb8-ab97-ebed21fc152b" satisfied condition "success or failure" Dec 28 21:15:18.579: INFO: Trying to get logs from node jerma-node pod pod-secrets-8f421332-9311-4cb8-ab97-ebed21fc152b container secret-volume-test: STEP: delete the pod Dec 28 21:15:18.834: INFO: Waiting for pod pod-secrets-8f421332-9311-4cb8-ab97-ebed21fc152b to disappear Dec 28 21:15:18.838: INFO: Pod pod-secrets-8f421332-9311-4cb8-ab97-ebed21fc152b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:15:18.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6629" for this suite. 
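The secret-volume test that just passed follows the consume-and-verify shape used throughout this run: create the Secret, mount it as a volume in a pod that terminates after printing the file, wait for the "success or failure" condition, then assert on the container log. A minimal sketch with illustrative names and data:

  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test-example
  data:
    data-1: dmFsdWUtMQ==                       # base64 for "value-1"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-example
  spec:
    restartPolicy: Never
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-example
    containers:
    - name: secret-volume-test                 # the container name the log pulls logs from
      image: docker.io/library/busybox:1.29    # assumption: the real test uses a small helper image
      command: ["cat", "/etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume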
• [SLOW TEST:8.693 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":432,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:15:18.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-e0e7bcd2-9287-4b7e-85bd-a922a214150b STEP: Creating a pod to test consume configMaps Dec 28 21:15:19.009: INFO: Waiting up to 5m0s for pod "pod-configmaps-345aca86-279c-4269-b8fb-0e622a3f10c6" in namespace "configmap-8172" to be "success or failure" Dec 28 21:15:19.013: INFO: Pod "pod-configmaps-345aca86-279c-4269-b8fb-0e622a3f10c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152717ms Dec 28 21:15:21.020: INFO: Pod "pod-configmaps-345aca86-279c-4269-b8fb-0e622a3f10c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010833002s Dec 28 21:15:23.026: INFO: Pod "pod-configmaps-345aca86-279c-4269-b8fb-0e622a3f10c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017413449s Dec 28 21:15:25.033: INFO: Pod "pod-configmaps-345aca86-279c-4269-b8fb-0e622a3f10c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023969248s Dec 28 21:15:27.042: INFO: Pod "pod-configmaps-345aca86-279c-4269-b8fb-0e622a3f10c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.033512352s STEP: Saw pod success Dec 28 21:15:27.042: INFO: Pod "pod-configmaps-345aca86-279c-4269-b8fb-0e622a3f10c6" satisfied condition "success or failure" Dec 28 21:15:27.046: INFO: Trying to get logs from node jerma-node pod pod-configmaps-345aca86-279c-4269-b8fb-0e622a3f10c6 container configmap-volume-test: STEP: delete the pod Dec 28 21:15:27.082: INFO: Waiting for pod pod-configmaps-345aca86-279c-4269-b8fb-0e622a3f10c6 to disappear Dec 28 21:15:27.088: INFO: Pod pod-configmaps-345aca86-279c-4269-b8fb-0e622a3f10c6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:15:27.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8172" for this suite. 
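The as-non-root variant above differs from a plain configMap consumer only in the pod security context, which is also why it carries the [LinuxOnly] tag. A sketch that mirrors the secret consumer earlier, with the UID being the assumption:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test-volume-example
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-example
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                          # assumption: any non-root UID exercises the same path
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-example
    containers:
    - name: configmap-volume-test
      image: docker.io/library/busybox:1.29    # assumption
      command: ["cat", "/etc/configmap-volume/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume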
• [SLOW TEST:8.288 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":437,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:15:27.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Dec 28 21:15:27.217: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 28 21:15:27.264: INFO: Waiting for terminating namespaces to be deleted... Dec 28 21:15:27.269: INFO: Logging pods the kubelet thinks is on node jerma-node before test Dec 28 21:15:27.282: INFO: weave-net-srfjj from kube-system started at 2019-12-17 21:23:16 +0000 UTC (2 container statuses recorded) Dec 28 21:15:27.282: INFO: Container weave ready: true, restart count 0 Dec 28 21:15:27.282: INFO: Container weave-npc ready: true, restart count 0 Dec 28 21:15:27.282: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container statuses recorded) Dec 28 21:15:27.282: INFO: Container kube-proxy ready: true, restart count 0 Dec 28 21:15:27.282: INFO: Logging pods the kubelet thinks is on node jerma-server-4b75xjbddvit before test Dec 28 21:15:27.305: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container statuses recorded) Dec 28 21:15:27.305: INFO: Container etcd ready: true, restart count 1 Dec 28 21:15:27.305: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container statuses recorded) Dec 28 21:15:27.305: INFO: Container kube-controller-manager ready: true, restart count 13 Dec 28 21:15:27.305: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container statuses recorded) Dec 28 21:15:27.305: INFO: Container kube-apiserver ready: true, restart count 1 Dec 28 21:15:27.305: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded) Dec 28 21:15:27.305: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded) Dec 28 21:15:27.305: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded) Dec 28 21:15:27.305: INFO: Container weave ready: true, restart count 0 Dec 28 21:15:27.305: INFO: Container weave-npc ready: true, restart count 0 Dec 28 21:15:27.305: INFO: 
coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container statuses recorded) Dec 28 21:15:27.305: INFO: Container coredns ready: true, restart count 0 Dec 28 21:15:27.305: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container statuses recorded) Dec 28 21:15:27.305: INFO: Container kube-scheduler ready: true, restart count 16 Dec 28 21:15:27.305: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container statuses recorded) Dec 28 21:15:27.305: INFO: Container kube-proxy ready: true, restart count 0 Dec 28 21:15:27.305: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container statuses recorded) Dec 28 21:15:27.305: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-4b75xjbddvit Dec 28 21:15:27.456: INFO: Pod coredns-5644d7b6d9-9sj58 requesting resource cpu=100m on Node jerma-server-4b75xjbddvit Dec 28 21:15:27.456: INFO: Pod coredns-5644d7b6d9-xvlxj requesting resource cpu=100m on Node jerma-server-4b75xjbddvit Dec 28 21:15:27.456: INFO: Pod etcd-jerma-server-4b75xjbddvit requesting resource cpu=0m on Node jerma-server-4b75xjbddvit Dec 28 21:15:27.456: INFO: Pod kube-apiserver-jerma-server-4b75xjbddvit requesting resource cpu=250m on Node jerma-server-4b75xjbddvit Dec 28 21:15:27.456: INFO: Pod kube-controller-manager-jerma-server-4b75xjbddvit requesting resource cpu=200m on Node jerma-server-4b75xjbddvit Dec 28 21:15:27.456: INFO: Pod kube-proxy-bdcvr requesting resource cpu=0m on Node jerma-server-4b75xjbddvit Dec 28 21:15:27.456: INFO: Pod kube-proxy-jcjl4 requesting resource cpu=0m on Node jerma-node Dec 28 21:15:27.456: INFO: Pod kube-scheduler-jerma-server-4b75xjbddvit requesting resource cpu=100m on Node jerma-server-4b75xjbddvit Dec 28 21:15:27.456: INFO: Pod weave-net-gsjjk requesting resource cpu=20m on Node jerma-server-4b75xjbddvit Dec 28 21:15:27.456: INFO: Pod weave-net-srfjj requesting resource cpu=20m on Node jerma-node STEP: Starting Pods to consume most of the cluster CPU. Dec 28 21:15:27.456: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node Dec 28 21:15:27.473: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-4b75xjbddvit STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-310be518-3822-41d8-8167-349f0388e301.15e4a57d666cb59d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2723/filler-pod-310be518-3822-41d8-8167-349f0388e301 to jerma-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-310be518-3822-41d8-8167-349f0388e301.15e4a57e4e2f08b5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-310be518-3822-41d8-8167-349f0388e301.15e4a57f181d2572], Reason = [Created], Message = [Created container filler-pod-310be518-3822-41d8-8167-349f0388e301] STEP: Considering event: Type = [Normal], Name = [filler-pod-310be518-3822-41d8-8167-349f0388e301.15e4a57f3f90ba3c], Reason = [Started], Message = [Started container filler-pod-310be518-3822-41d8-8167-349f0388e301] STEP: Considering event: Type = [Normal], Name = [filler-pod-8e4d1c66-3a07-4570-ac33-586fa3d2242e.15e4a57d678728bf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2723/filler-pod-8e4d1c66-3a07-4570-ac33-586fa3d2242e to jerma-server-4b75xjbddvit] STEP: Considering event: Type = [Normal], Name = [filler-pod-8e4d1c66-3a07-4570-ac33-586fa3d2242e.15e4a57e5c3a153c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-8e4d1c66-3a07-4570-ac33-586fa3d2242e.15e4a57f11750e15], Reason = [Created], Message = [Created container filler-pod-8e4d1c66-3a07-4570-ac33-586fa3d2242e] STEP: Considering event: Type = [Normal], Name = [filler-pod-8e4d1c66-3a07-4570-ac33-586fa3d2242e.15e4a57f5483dc16], Reason = [Started], Message = [Started container filler-pod-8e4d1c66-3a07-4570-ac33-586fa3d2242e] STEP: Considering event: Type = [Warning], Name = [additional-pod.15e4a57fbe515f31], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-server-4b75xjbddvit STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:15:38.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2723" for this suite. 
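The predicate validated above is plain CPU accounting. The test sums the CPU requests already present on each node (listed line by line above), fills the remaining allocatable CPU with pause-image filler pods, and then submits one more pod that cannot fit anywhere, expecting exactly the "0/2 nodes are available: 2 Insufficient cpu." event it saw. A sketch of the jerma-node filler pod, with the request figure taken from the log:

  apiVersion: v1
  kind: Pod
  metadata:
    name: filler-pod-example
  spec:
    containers:
    - name: filler
      image: k8s.gcr.io/pause:3.1    # the image named in the scheduling events above
      resources:
        requests:
          cpu: 2786m                 # the remainder the run computed for jerma-node

The additional pod is identical in shape but requests more CPU than either node has left, so every node rejects it.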
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:11.581 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":27,"skipped":460,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:15:38.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Dec 28 21:15:38.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2702' Dec 28 21:15:38.977: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 28 21:15:38.978: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582 Dec 28 21:15:40.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2702' Dec 28 21:15:41.336: INFO: stderr: "" Dec 28 21:15:41.336: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:15:41.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2702" for this suite. 
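The deprecation warning captured above is the point of this test: in this kubectl version, kubectl run without an explicit generator falls back to --generator=deployment/apps.v1 and therefore creates a Deployment, not a bare pod. What that generator produces is roughly the following (a sketch reconstructed from the generator's documented run=<name> labelling convention):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: e2e-test-httpd-deployment
    labels:
      run: e2e-test-httpd-deployment
  spec:
    replicas: 1
    selector:
      matchLabels:
        run: e2e-test-httpd-deployment
    template:
      metadata:
        labels:
          run: e2e-test-httpd-deployment
      spec:
        containers:
        - name: e2e-test-httpd-deployment
          image: docker.io/library/httpd:2.4.38-alpine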
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":28,"skipped":471,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:15:41.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-819f4ba4-a390-47fb-b26e-b44c007941c1 STEP: Creating a pod to test consume configMaps Dec 28 21:15:41.499: INFO: Waiting up to 5m0s for pod "pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445" in namespace "configmap-7958" to be "success or failure" Dec 28 21:15:41.514: INFO: Pod "pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445": Phase="Pending", Reason="", readiness=false. Elapsed: 14.54788ms Dec 28 21:15:43.530: INFO: Pod "pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030234746s Dec 28 21:15:45.916: INFO: Pod "pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445": Phase="Pending", Reason="", readiness=false. Elapsed: 4.416496764s Dec 28 21:15:47.940: INFO: Pod "pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440303569s Dec 28 21:15:49.957: INFO: Pod "pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445": Phase="Pending", Reason="", readiness=false. Elapsed: 8.457582272s Dec 28 21:15:51.975: INFO: Pod "pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445": Phase="Pending", Reason="", readiness=false. Elapsed: 10.475221088s Dec 28 21:15:53.985: INFO: Pod "pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.48538586s STEP: Saw pod success Dec 28 21:15:53.985: INFO: Pod "pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445" satisfied condition "success or failure" Dec 28 21:15:53.990: INFO: Trying to get logs from node jerma-node pod pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445 container configmap-volume-test: STEP: delete the pod Dec 28 21:15:54.059: INFO: Waiting for pod pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445 to disappear Dec 28 21:15:54.067: INFO: Pod pod-configmaps-095e791c-b834-463b-a1ca-e5c102ee6445 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:15:54.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7958" for this suite. 
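A "mapping" in the test title above is the items list on the configMap volume source: rather than one file per key at the volume root, selected keys are projected to chosen relative paths. A sketch, with key and path names illustrative and the non-root UID assumed as before:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test-volume-map-example
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-map-example
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-map-example
        items:
        - key: data-1
          path: path/to/data-2                 # the mapping: key data-1 surfaces at this relative path
    containers:
    - name: configmap-volume-test
      image: docker.io/library/busybox:1.29    # assumption
      command: ["cat", "/etc/configmap-volume/path/to/data-2"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume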
• [SLOW TEST:12.768 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":472,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:15:54.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-3232d451-c0b9-49f5-9d71-7b5944be7205 STEP: Creating secret with name s-test-opt-upd-f98e3179-5ffc-4dd3-b8b5-69ddb7d2e877 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3232d451-c0b9-49f5-9d71-7b5944be7205 STEP: Updating secret s-test-opt-upd-f98e3179-5ffc-4dd3-b8b5-69ddb7d2e877 STEP: Creating secret with name s-test-opt-create-bf657f66-30d3-4d49-9902-5b1a0a8baee5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:16:08.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9076" for this suite. 
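The optional flag is what lets the test above delete one source secret and create another while the pod keeps running: a projected volume tolerates a missing optional secret, and the kubelet refreshes the volume as the sources change, which is the update the test waits to observe. A sketch of the volume shape, with all names illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-example
  spec:
    volumes:
    - name: projected-secret-volume
      projected:
        sources:
        - secret:
            name: s-test-opt-del-example       # deleted mid-test; optional, so the pod is unaffected
            optional: true
        - secret:
            name: s-test-opt-upd-example       # updated mid-test; the new value appears in the volume
            optional: true
        - secret:
            name: s-test-opt-create-example    # created mid-test; its keys show up once it exists
            optional: true
    containers:
    - name: projected-secret-volume-test
      image: docker.io/library/busybox:1.29    # assumption
      command: ["sh", "-c", "sleep 3600"]      # stays up so the volume refresh can be observed
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected-secret-volume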
• [SLOW TEST:14.437 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":492,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:16:08.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Dec 28 21:16:09.426: INFO: Pod name wrapped-volume-race-7b936d7c-cfc1-46e5-84d7-24a104789f5a: Found 0 pods out of 5 Dec 28 21:16:14.477: INFO: Pod name wrapped-volume-race-7b936d7c-cfc1-46e5-84d7-24a104789f5a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7b936d7c-cfc1-46e5-84d7-24a104789f5a in namespace emptydir-wrapper-5452, will wait for the garbage collector to delete the pods Dec 28 21:16:44.633: INFO: Deleting ReplicationController wrapped-volume-race-7b936d7c-cfc1-46e5-84d7-24a104789f5a took: 55.690188ms Dec 28 21:16:44.933: INFO: Terminating ReplicationController wrapped-volume-race-7b936d7c-cfc1-46e5-84d7-24a104789f5a pods took: 300.625598ms STEP: Creating RC which spawns configmap-volume pods Dec 28 21:17:07.142: INFO: Pod name wrapped-volume-race-f3de03e0-ffb8-4573-ab4c-cacecbe64f9f: Found 0 pods out of 5 Dec 28 21:17:12.178: INFO: Pod name wrapped-volume-race-f3de03e0-ffb8-4573-ab4c-cacecbe64f9f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f3de03e0-ffb8-4573-ab4c-cacecbe64f9f in namespace emptydir-wrapper-5452, will wait for the garbage collector to delete the pods Dec 28 21:17:40.289: INFO: Deleting ReplicationController wrapped-volume-race-f3de03e0-ffb8-4573-ab4c-cacecbe64f9f took: 17.083727ms Dec 28 21:17:40.689: INFO: Terminating ReplicationController wrapped-volume-race-f3de03e0-ffb8-4573-ab4c-cacecbe64f9f pods took: 400.661433ms STEP: Creating RC which spawns configmap-volume pods Dec 28 21:17:57.724: INFO: Pod name wrapped-volume-race-941eca55-6529-4f3e-b228-40637ad1debb: Found 0 pods out of 5 Dec 28 21:18:02.738: INFO: Pod name wrapped-volume-race-941eca55-6529-4f3e-b228-40637ad1debb: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-941eca55-6529-4f3e-b228-40637ad1debb in namespace emptydir-wrapper-5452, will wait for the garbage collector to delete the pods Dec 28 21:18:26.898: INFO: Deleting ReplicationController 
wrapped-volume-race-941eca55-6529-4f3e-b228-40637ad1debb took: 11.375016ms Dec 28 21:18:27.499: INFO: Terminating ReplicationController wrapped-volume-race-941eca55-6529-4f3e-b228-40637ad1debb pods took: 600.727993ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:18:48.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5452" for this suite. • [SLOW TEST:159.592 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":31,"skipped":499,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:18:48.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-8781 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8781 STEP: Deleting pre-stop pod Dec 28 21:19:05.586: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:19:05.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8781" for this suite. 
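The prestop verification above pairs a server pod that records incoming requests with a tester pod whose preStop hook calls the server; deleting the tester fires the hook during termination, and the Saw: JSON dumped by the server ("prestop": 1) confirms exactly one hit. The hook itself is a one-stanza addition to the tester's container; a sketch with a hypothetical server endpoint:

  apiVersion: v1
  kind: Pod
  metadata:
    name: tester
  spec:
    containers:
    - name: tester
      image: docker.io/library/busybox:1.29    # assumption
      command: ["sleep", "3600"]
      lifecycle:
        preStop:
          exec:
            command: ["wget", "-qO-", "http://SERVER_POD_IP:8080/prestop"]   # hypothetical URL; the run's server pod counts this request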
• [SLOW TEST:17.489 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":32,"skipped":511,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:19:05.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:19:14.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9010" for this suite. • [SLOW TEST:9.183 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":33,"skipped":527,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:19:14.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-111b09fb-6bbd-4860-9069-05c7ce0b5786 STEP: Creating a pod to test consume secrets Dec 28 21:19:14.958: INFO: Waiting up to 5m0s for pod "pod-secrets-30ed3798-7aaa-4d33-976e-d4890ea322af" in namespace "secrets-3199" to be "success or failure" Dec 28 21:19:14.968: INFO: Pod "pod-secrets-30ed3798-7aaa-4d33-976e-d4890ea322af": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.928501ms Dec 28 21:19:16.975: INFO: Pod "pod-secrets-30ed3798-7aaa-4d33-976e-d4890ea322af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016770025s Dec 28 21:19:18.983: INFO: Pod "pod-secrets-30ed3798-7aaa-4d33-976e-d4890ea322af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024969494s Dec 28 21:19:20.990: INFO: Pod "pod-secrets-30ed3798-7aaa-4d33-976e-d4890ea322af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031849843s Dec 28 21:19:22.998: INFO: Pod "pod-secrets-30ed3798-7aaa-4d33-976e-d4890ea322af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.03969738s STEP: Saw pod success Dec 28 21:19:22.998: INFO: Pod "pod-secrets-30ed3798-7aaa-4d33-976e-d4890ea322af" satisfied condition "success or failure" Dec 28 21:19:23.002: INFO: Trying to get logs from node jerma-node pod pod-secrets-30ed3798-7aaa-4d33-976e-d4890ea322af container secret-volume-test: STEP: delete the pod Dec 28 21:19:23.103: INFO: Waiting for pod pod-secrets-30ed3798-7aaa-4d33-976e-d4890ea322af to disappear Dec 28 21:19:23.106: INFO: Pod pod-secrets-30ed3798-7aaa-4d33-976e-d4890ea322af no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:19:23.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3199" for this suite. • [SLOW TEST:8.283 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":538,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:19:23.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Dec 28 21:19:23.209: INFO: >>> kubeConfig: /root/.kube/config Dec 28 21:19:26.911: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:19:39.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7819" for this suite. 
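Publication to OpenAPI, as exercised above, requires CRDs with a structural schema; two kinds registered in the same group and version then both show up under the apiserver's /openapi/v2 document, which is what the test inspects. A sketch of one such CRD; the group, kind, and schema here are all illustrative:

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: foos.example.com                     # hypothetical group and kind
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: foos
      singular: foo
      kind: Foo
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:                       # the structural schema that gets published
          type: object
          properties:
            spec:
              type: object
              properties:
                bars:
                  type: integer

A second CRD differing only in its kind (say, Bar) completes the same-group, same-version, different-kinds pair the test needs.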
• [SLOW TEST:16.185 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":35,"skipped":545,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:19:39.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Dec 28 21:19:39.424: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 28 21:19:39.440: INFO: Waiting for terminating namespaces to be deleted... Dec 28 21:19:39.444: INFO: Logging pods the kubelet thinks is on node jerma-node before test Dec 28 21:19:39.453: INFO: weave-net-srfjj from kube-system started at 2019-12-17 21:23:16 +0000 UTC (2 container statuses recorded) Dec 28 21:19:39.453: INFO: Container weave ready: true, restart count 0 Dec 28 21:19:39.453: INFO: Container weave-npc ready: true, restart count 0 Dec 28 21:19:39.453: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container statuses recorded) Dec 28 21:19:39.453: INFO: Container kube-proxy ready: true, restart count 0 Dec 28 21:19:39.453: INFO: Logging pods the kubelet thinks is on node jerma-server-4b75xjbddvit before test Dec 28 21:19:39.474: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container statuses recorded) Dec 28 21:19:39.474: INFO: Container kube-scheduler ready: true, restart count 16 Dec 28 21:19:39.474: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container statuses recorded) Dec 28 21:19:39.474: INFO: Container kube-proxy ready: true, restart count 0 Dec 28 21:19:39.474: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container statuses recorded) Dec 28 21:19:39.474: INFO: Container coredns ready: true, restart count 0 Dec 28 21:19:39.474: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container statuses recorded) Dec 28 21:19:39.474: INFO: Container etcd ready: true, restart count 1 Dec 28 21:19:39.474: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container statuses recorded) Dec 28 21:19:39.474: INFO: Container kube-controller-manager ready: true, restart count 13 Dec 28 21:19:39.474: INFO: 
kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container statuses recorded) Dec 28 21:19:39.474: INFO: Container kube-apiserver ready: true, restart count 1 Dec 28 21:19:39.474: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded) Dec 28 21:19:39.474: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded) Dec 28 21:19:39.474: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded) Dec 28 21:19:39.474: INFO: Container weave ready: true, restart count 0 Dec 28 21:19:39.474: INFO: Container weave-npc ready: true, restart count 0 Dec 28 21:19:39.474: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container statuses recorded) Dec 28 21:19:39.474: INFO: Container coredns ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7ab64bb6-d995-4a88-afda-58fc5717d399 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-7ab64bb6-d995-4a88-afda-58fc5717d399 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-7ab64bb6-d995-4a88-afda-58fc5717d399 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:20:11.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2713" for this suite. 
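No conflict arises above because the tuple a node requires to be unique is (hostIP, hostPort, protocol): pod2 differs from pod1 by hostIP, and pod3 differs from pod2 by protocol, so all three pods with hostPort 54321 can land on the same node. A sketch of the third pod's shape; the image is an assumption, and the nodeSelector uses the random label the test applied above:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod3
  spec:
    nodeSelector:
      kubernetes.io/e2e-7ab64bb6-d995-4a88-afda-58fc5717d399: "90"
    containers:
    - name: port-holder
      image: k8s.gcr.io/pause:3.1              # assumption: any image; only the port tuple matters
      ports:
      - containerPort: 54321
        hostPort: 54321
        hostIP: 127.0.0.2
        protocol: UDP                          # pod2 is identical except protocol: TCP; pod1 uses hostIP 127.0.0.1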
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:32.670 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":36,"skipped":561,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:20:11.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 28 21:20:12.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92" in namespace "projected-8278" to be "success or failure" Dec 28 21:20:12.371: INFO: Pod "downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92": Phase="Pending", Reason="", readiness=false. Elapsed: 105.593999ms Dec 28 21:20:14.379: INFO: Pod "downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113538002s Dec 28 21:20:16.388: INFO: Pod "downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12289106s Dec 28 21:20:18.396: INFO: Pod "downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130458392s Dec 28 21:20:20.407: INFO: Pod "downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92": Phase="Pending", Reason="", readiness=false. Elapsed: 8.141718429s Dec 28 21:20:22.415: INFO: Pod "downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92": Phase="Pending", Reason="", readiness=false. Elapsed: 10.149527986s Dec 28 21:20:24.427: INFO: Pod "downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.161741754s STEP: Saw pod success Dec 28 21:20:24.427: INFO: Pod "downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92" satisfied condition "success or failure" Dec 28 21:20:24.433: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92 container client-container: STEP: delete the pod Dec 28 21:20:24.509: INFO: Waiting for pod downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92 to disappear Dec 28 21:20:24.545: INFO: Pod downwardapi-volume-0413b94b-6d51-423d-b2ed-445bfb0a2f92 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:20:24.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8278" for this suite. • [SLOW TEST:12.596 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:20:24.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Dec 28 21:20:24.705: INFO: Waiting up to 5m0s for pod "var-expansion-4c81b2b4-d93c-4427-9b4c-cab225be177e" in namespace "var-expansion-2490" to be "success or failure" Dec 28 21:20:24.758: INFO: Pod "var-expansion-4c81b2b4-d93c-4427-9b4c-cab225be177e": Phase="Pending", Reason="", readiness=false. Elapsed: 52.485483ms Dec 28 21:20:26.775: INFO: Pod "var-expansion-4c81b2b4-d93c-4427-9b4c-cab225be177e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070037736s Dec 28 21:20:29.129: INFO: Pod "var-expansion-4c81b2b4-d93c-4427-9b4c-cab225be177e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.423315288s Dec 28 21:20:31.156: INFO: Pod "var-expansion-4c81b2b4-d93c-4427-9b4c-cab225be177e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450720388s Dec 28 21:20:33.391: INFO: Pod "var-expansion-4c81b2b4-d93c-4427-9b4c-cab225be177e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.685836225s STEP: Saw pod success Dec 28 21:20:33.391: INFO: Pod "var-expansion-4c81b2b4-d93c-4427-9b4c-cab225be177e" satisfied condition "success or failure" Dec 28 21:20:33.397: INFO: Trying to get logs from node jerma-server-4b75xjbddvit pod var-expansion-4c81b2b4-d93c-4427-9b4c-cab225be177e container dapi-container: STEP: delete the pod Dec 28 21:20:33.667: INFO: Waiting for pod var-expansion-4c81b2b4-d93c-4427-9b4c-cab225be177e to disappear Dec 28 21:20:33.675: INFO: Pod var-expansion-4c81b2b4-d93c-4427-9b4c-cab225be177e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:20:33.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2490" for this suite. • [SLOW TEST:9.130 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":595,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:20:33.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6197 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6197 STEP: creating replication controller externalsvc in namespace services-6197 I1228 21:20:34.028012 8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6197, replica count: 2 I1228 21:20:37.079445 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 21:20:40.079995 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 21:20:43.080457 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Dec 28 21:20:43.202: INFO: Creating new exec pod Dec 28 21:20:49.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6197 
execpodsw9j8 -- /bin/sh -x -c nslookup clusterip-service' Dec 28 21:20:49.984: INFO: stderr: "+ nslookup clusterip-service\n" Dec 28 21:20:49.984: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6197.svc.cluster.local\tcanonical name = externalsvc.services-6197.svc.cluster.local.\nName:\texternalsvc.services-6197.svc.cluster.local\nAddress: 10.101.198.214\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6197, will wait for the garbage collector to delete the pods Dec 28 21:20:50.054: INFO: Deleting ReplicationController externalsvc took: 11.351494ms Dec 28 21:20:50.555: INFO: Terminating ReplicationController externalsvc pods took: 501.659798ms Dec 28 21:21:06.907: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:21:06.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6197" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:33.282 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":39,"skipped":614,"failed":0} [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:21:06.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-7c512c8f-f439-4fb5-8087-af88a79e1def STEP: Creating configMap with name cm-test-opt-upd-9a67df76-aec8-4a5e-a710-8aa51e72a719 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7c512c8f-f439-4fb5-8087-af88a79e1def STEP: Updating configmap cm-test-opt-upd-9a67df76-aec8-4a5e-a710-8aa51e72a719 STEP: Creating configMap with name cm-test-opt-create-890fe4e9-f8d2-40c9-b498-20adb8126861 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:22:28.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7809" for this suite. 
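What the "optional updates" steps above exercise: projected configMap sources marked optional tolerate a missing configMap at pod start, and the kubelet later folds creates, updates, and deletes of those maps into the mounted files without restarting the pod. A rough sketch under assumed names (cm-upd, cm-later, the busybox image):

    kubectl create configmap cm-upd --from-literal=key=value-1

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-optional-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: all-in-one
          mountPath: /etc/projected
      volumes:
      - name: all-in-one
        projected:
          sources:
          - configMap:
              name: cm-upd
              optional: true        # tolerated if absent
          - configMap:
              name: cm-later        # does not exist yet; pod starts anyway
              optional: true
    EOF

    # Both changes surface under /etc/projected without recreating the pod:
    kubectl patch configmap cm-upd -p '{"data":{"key":"value-2"}}'
    kubectl create configmap cm-later --from-literal=key=value-3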
• [SLOW TEST:81.199 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:22:28.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Dec 28 21:22:28.943: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:22:29.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8146" for this suite. 
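The --port 0 (-p 0) case above asks kubectl proxy for an ephemeral port; the proxy prints the port it actually bound, and the test curls /api/ through it. A minimal reproduction (the printed port number is an example):

    kubectl proxy -p 0 --disable-filter &    # e.g. "Starting to serve on 127.0.0.1:46423"
    curl http://127.0.0.1:46423/api/         # substitute the port from the line above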
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":41,"skipped":641,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:22:29.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:22:29.166: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Dec 28 21:22:32.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6174 create -f -' Dec 28 21:22:35.956: INFO: stderr: "" Dec 28 21:22:35.956: INFO: stdout: "e2e-test-crd-publish-openapi-5442-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Dec 28 21:22:35.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6174 delete e2e-test-crd-publish-openapi-5442-crds test-cr' Dec 28 21:22:36.151: INFO: stderr: "" Dec 28 21:22:36.151: INFO: stdout: "e2e-test-crd-publish-openapi-5442-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Dec 28 21:22:36.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6174 apply -f -' Dec 28 21:22:36.558: INFO: stderr: "" Dec 28 21:22:36.558: INFO: stdout: "e2e-test-crd-publish-openapi-5442-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Dec 28 21:22:36.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6174 delete e2e-test-crd-publish-openapi-5442-crds test-cr' Dec 28 21:22:36.784: INFO: stderr: "" Dec 28 21:22:36.784: INFO: stdout: "e2e-test-crd-publish-openapi-5442-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Dec 28 21:22:36.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5442-crds' Dec 28 21:22:37.139: INFO: stderr: "" Dec 28 21:22:37.139: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5442-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:22:40.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6174" for this suite. 
• [SLOW TEST:10.955 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":42,"skipped":642,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:22:40.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-86a4b5df-33f8-4416-a5bd-ac3f32586a1b STEP: Creating secret with name secret-projected-all-test-volume-d9a8c2d9-970b-4480-bf30-02ae87817f40 STEP: Creating a pod to test Check all projections for projected volume plugin Dec 28 21:22:40.254: INFO: Waiting up to 5m0s for pod "projected-volume-a88f24f9-04bf-470f-8be5-b6ef9afa500b" in namespace "projected-5707" to be "success or failure" Dec 28 21:22:40.313: INFO: Pod "projected-volume-a88f24f9-04bf-470f-8be5-b6ef9afa500b": Phase="Pending", Reason="", readiness=false. Elapsed: 58.918204ms Dec 28 21:22:42.322: INFO: Pod "projected-volume-a88f24f9-04bf-470f-8be5-b6ef9afa500b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067372488s Dec 28 21:22:44.691: INFO: Pod "projected-volume-a88f24f9-04bf-470f-8be5-b6ef9afa500b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436246936s Dec 28 21:22:47.071: INFO: Pod "projected-volume-a88f24f9-04bf-470f-8be5-b6ef9afa500b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.816712452s Dec 28 21:22:49.077: INFO: Pod "projected-volume-a88f24f9-04bf-470f-8be5-b6ef9afa500b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.822769943s STEP: Saw pod success Dec 28 21:22:49.077: INFO: Pod "projected-volume-a88f24f9-04bf-470f-8be5-b6ef9afa500b" satisfied condition "success or failure" Dec 28 21:22:49.089: INFO: Trying to get logs from node jerma-server-4b75xjbddvit pod projected-volume-a88f24f9-04bf-470f-8be5-b6ef9afa500b container projected-all-volume-test: STEP: delete the pod Dec 28 21:22:49.175: INFO: Waiting for pod projected-volume-a88f24f9-04bf-470f-8be5-b6ef9afa500b to disappear Dec 28 21:22:49.181: INFO: Pod projected-volume-a88f24f9-04bf-470f-8be5-b6ef9afa500b no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:22:49.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5707" for this suite. 
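The projection API combined here is one volume fed by three source types at once: configMap, secret, and downward API. A hedged sketch (demo-cm, demo-secret, and the file paths are illustrative):

    kubectl create configmap demo-cm --from-literal=config=from-configmap
    kubectl create secret generic demo-secret --from-literal=token=from-secret

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-all-demo
      labels: {app: demo}
    spec:
      restartPolicy: Never
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "cat /projected/cm /projected/secret /projected/labels"]
        volumeMounts:
        - name: all
          mountPath: /projected
      volumes:
      - name: all
        projected:
          sources:
          - configMap:
              name: demo-cm
              items: [{key: config, path: cm}]
          - secret:
              name: demo-secret
              items: [{key: token, path: secret}]
          - downwardAPI:
              items:
              - path: labels
                fieldRef: {fieldPath: metadata.labels}
    EOF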
• [SLOW TEST:9.163 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":43,"skipped":649,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:22:49.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-d6920a54-3d5c-40cc-9ec2-4b69003da309 STEP: Creating a pod to test consume configMaps Dec 28 21:22:49.402: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8373708-81ef-41fd-9c51-26bf69aa600f" in namespace "configmap-8725" to be "success or failure" Dec 28 21:22:49.421: INFO: Pod "pod-configmaps-c8373708-81ef-41fd-9c51-26bf69aa600f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.210525ms Dec 28 21:22:51.433: INFO: Pod "pod-configmaps-c8373708-81ef-41fd-9c51-26bf69aa600f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031049935s Dec 28 21:22:53.447: INFO: Pod "pod-configmaps-c8373708-81ef-41fd-9c51-26bf69aa600f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044214153s Dec 28 21:22:55.475: INFO: Pod "pod-configmaps-c8373708-81ef-41fd-9c51-26bf69aa600f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07271748s Dec 28 21:22:57.484: INFO: Pod "pod-configmaps-c8373708-81ef-41fd-9c51-26bf69aa600f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081627924s STEP: Saw pod success Dec 28 21:22:57.484: INFO: Pod "pod-configmaps-c8373708-81ef-41fd-9c51-26bf69aa600f" satisfied condition "success or failure" Dec 28 21:22:57.489: INFO: Trying to get logs from node jerma-node pod pod-configmaps-c8373708-81ef-41fd-9c51-26bf69aa600f container configmap-volume-test: STEP: delete the pod Dec 28 21:22:57.617: INFO: Waiting for pod pod-configmaps-c8373708-81ef-41fd-9c51-26bf69aa600f to disappear Dec 28 21:22:57.624: INFO: Pod pod-configmaps-c8373708-81ef-41fd-9c51-26bf69aa600f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:22:57.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8725" for this suite. 
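The "with mappings" variant mounts a configMap through items, remapping a key to a chosen relative path instead of exposing every key under its own name. Roughly (names assumed):

    kubectl create configmap cm-map-demo --from-literal=data-1=value-1

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-mapping-demo
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["cat", "/etc/cm/path/to/data-1"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: cm-map-demo
          items:
          - key: data-1
            path: path/to/data-1    # only this mapped path appears in the volume
    EOF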
• [SLOW TEST:8.443 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":690,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:22:57.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 28 21:22:58.552: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 28 21:23:00.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 21:23:02.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 21:23:04.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713164978, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 28 21:23:07.703: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration Dec 28 21:23:07.763: INFO: Waiting for webhook configuration to be ready... STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:23:08.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7813" for this suite. STEP: Destroying namespace "webhook-7813-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.575 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":45,"skipped":692,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:23:08.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1027 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Dec 28 21:23:08.388: INFO: Found 0 stateful pods, waiting for 3 Dec 28 21:23:18.550: INFO: Found 2 stateful pods, waiting for 3 Dec 28 21:23:28.398: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:23:28.398: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:23:28.398: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 28 21:23:38.402: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:23:38.403: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:23:38.403: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Dec 28 21:23:38.453: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Dec 28 21:23:48.558: INFO: Updating stateful set ss2 Dec 28 21:23:48.569: INFO: Waiting for Pod statefulset-1027/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Dec 28 21:23:58.934: INFO: Found 2 stateful pods, waiting for 3 Dec 28 21:24:08.944: INFO: Found 2 stateful pods, waiting for 3 Dec 28 
21:24:19.015: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:24:19.015: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:24:19.015: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Dec 28 21:24:19.040: INFO: Updating stateful set ss2 Dec 28 21:24:19.087: INFO: Waiting for Pod statefulset-1027/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Dec 28 21:24:29.146: INFO: Updating stateful set ss2 Dec 28 21:24:29.206: INFO: Waiting for StatefulSet statefulset-1027/ss2 to complete update Dec 28 21:24:29.206: INFO: Waiting for Pod statefulset-1027/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Dec 28 21:24:39.221: INFO: Waiting for StatefulSet statefulset-1027/ss2 to complete update Dec 28 21:24:39.222: INFO: Waiting for Pod statefulset-1027/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Dec 28 21:24:49.216: INFO: Waiting for StatefulSet statefulset-1027/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Dec 28 21:24:59.225: INFO: Deleting all statefulset in ns statefulset-1027 Dec 28 21:24:59.229: INFO: Scaling statefulset ss2 to 0 Dec 28 21:25:29.292: INFO: Waiting for statefulset status.replicas updated to 0 Dec 28 21:25:29.300: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:25:29.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1027" for this suite. 
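Both behaviors above hang off spec.updateStrategy.rollingUpdate.partition: pods with an ordinal >= partition move to the new revision, pods below it keep the old one. A partition larger than the replica count therefore holds everything back, partition 2 of 3 is the canary, and walking it down to 0 is the phased rollout. Sketched against the log's own ss2 set and image bump (the suite drives this through the API rather than kubectl, so this is an equivalent reproduction, not its literal code):

    # Change the template while partition >= replicas: nothing rolls yet.
    kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":3}}}}'
    kubectl patch statefulset ss2 --type=json \
      -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/httpd:2.4.39-alpine"}]'

    # Canary: only the highest ordinal (ss2-2) takes the new revision.
    kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

    # Phased: step the partition down to roll the remaining pods in order.
    kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":1}}}}'
    kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
    kubectl rollout status statefulset ss2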
• [SLOW TEST:141.172 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":46,"skipped":700,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:25:29.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-2027/configmap-test-e8e9d0cd-84e3-446d-a725-50f82f8a5cc7 STEP: Creating a pod to test consume configMaps Dec 28 21:25:29.571: INFO: Waiting up to 5m0s for pod "pod-configmaps-71e3830b-d050-422d-bd07-4624e3cdabbd" in namespace "configmap-2027" to be "success or failure" Dec 28 21:25:29.577: INFO: Pod "pod-configmaps-71e3830b-d050-422d-bd07-4624e3cdabbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40699ms Dec 28 21:25:31.586: INFO: Pod "pod-configmaps-71e3830b-d050-422d-bd07-4624e3cdabbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015298052s Dec 28 21:25:33.598: INFO: Pod "pod-configmaps-71e3830b-d050-422d-bd07-4624e3cdabbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026728673s Dec 28 21:25:35.670: INFO: Pod "pod-configmaps-71e3830b-d050-422d-bd07-4624e3cdabbd": Phase="Running", Reason="", readiness=true. Elapsed: 6.099106656s Dec 28 21:25:37.678: INFO: Pod "pod-configmaps-71e3830b-d050-422d-bd07-4624e3cdabbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.107429167s STEP: Saw pod success Dec 28 21:25:37.679: INFO: Pod "pod-configmaps-71e3830b-d050-422d-bd07-4624e3cdabbd" satisfied condition "success or failure" Dec 28 21:25:37.683: INFO: Trying to get logs from node jerma-node pod pod-configmaps-71e3830b-d050-422d-bd07-4624e3cdabbd container env-test: STEP: delete the pod Dec 28 21:25:37.769: INFO: Waiting for pod pod-configmaps-71e3830b-d050-422d-bd07-4624e3cdabbd to disappear Dec 28 21:25:37.776: INFO: Pod pod-configmaps-71e3830b-d050-422d-bd07-4624e3cdabbd no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:25:37.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2027" for this suite. 
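Consumption "via environment variable" means env[].valueFrom.configMapKeyRef rather than a volume mount. A minimal sketch (names assumed):

    kubectl create configmap env-demo --from-literal=data-1=value-1

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
        env:
        - name: CONFIG_DATA_1
          valueFrom:
            configMapKeyRef:
              name: env-demo
              key: data-1
    EOF

    kubectl logs cm-env-demo    # CONFIG_DATA_1=value-1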
• [SLOW TEST:8.405 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:25:37.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3556, will wait for the garbage collector to delete the pods Dec 28 21:25:48.026: INFO: Deleting Job.batch foo took: 8.344075ms Dec 28 21:25:48.127: INFO: Terminating Job.batch foo pods took: 101.173141ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:26:26.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3556" for this suite. 
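The long "Ensuring job was deleted" wait above is cascading deletion at work: the test does not finish until the garbage collector has removed the Job's pods. With a reasonably current kubectl this reproduces as follows (before v1.20 the flag took --cascade=true/false instead):

    kubectl create job foo --image=busybox -- sleep 300
    kubectl delete job foo --cascade=foreground   # blocks until dependent pods are gone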
• [SLOW TEST:48.973 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":48,"skipped":736,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:26:26.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 28 21:26:27.941: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 28 21:26:29.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 21:26:31.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 
21:26:33.973: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713165187, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 28 21:26:37.008: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:26:37.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5619-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:26:38.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9662" for this suite. STEP: Destroying namespace "webhook-9662-markers" for this suite. 
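The storage-version flip in this test is an ordinary CRD patch: exactly one entry in spec.versions may carry storage: true at a time, so promoting v2 means demoting v1 in the same operation. Sketch only, assuming v1 and v2 sit at indexes 0 and 1 of the suite's e2e-test-webhook-5619-crds CRD:

    kubectl patch crd e2e-test-webhook-5619-crds.webhook.example.com --type=json -p='[
      {"op":"replace","path":"/spec/versions/0/storage","value":false},
      {"op":"replace","path":"/spec/versions/1/storage","value":true}
    ]'

Newly written objects are then persisted as v2, which is why the subsequent patch of the custom resource exercises the mutating webhook under the new stored version.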
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.115 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":49,"skipped":785,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:26:38.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Dec 28 21:26:38.946: INFO: Waiting up to 5m0s for pod "pod-031b0fdc-6b54-43bb-85fe-4a6a081c8622" in namespace "emptydir-6185" to be "success or failure" Dec 28 21:26:38.951: INFO: Pod "pod-031b0fdc-6b54-43bb-85fe-4a6a081c8622": Phase="Pending", Reason="", readiness=false. Elapsed: 4.698532ms Dec 28 21:26:40.961: INFO: Pod "pod-031b0fdc-6b54-43bb-85fe-4a6a081c8622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014212734s Dec 28 21:26:42.970: INFO: Pod "pod-031b0fdc-6b54-43bb-85fe-4a6a081c8622": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023312468s Dec 28 21:26:44.980: INFO: Pod "pod-031b0fdc-6b54-43bb-85fe-4a6a081c8622": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033438302s Dec 28 21:26:46.988: INFO: Pod "pod-031b0fdc-6b54-43bb-85fe-4a6a081c8622": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041306968s STEP: Saw pod success Dec 28 21:26:46.988: INFO: Pod "pod-031b0fdc-6b54-43bb-85fe-4a6a081c8622" satisfied condition "success or failure" Dec 28 21:26:46.991: INFO: Trying to get logs from node jerma-node pod pod-031b0fdc-6b54-43bb-85fe-4a6a081c8622 container test-container: STEP: delete the pod Dec 28 21:26:47.042: INFO: Waiting for pod pod-031b0fdc-6b54-43bb-85fe-4a6a081c8622 to disappear Dec 28 21:26:47.047: INFO: Pod pod-031b0fdc-6b54-43bb-85fe-4a6a081c8622 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:26:47.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6185" for this suite. 
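"Default medium" is an emptyDir with no medium field (node-local disk rather than tmpfs), and the mode being checked is the world-writable permission bits the suite expects on the mount point. A sketch:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /mnt/volume"]
        volumeMounts:
        - name: vol
          mountPath: /mnt/volume
      volumes:
      - name: vol
        emptyDir: {}       # no medium given, i.e. the default (disk) medium
    EOF

    kubectl logs emptydir-mode-demo    # expect drwxrwxrwx on /mnt/volume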
• [SLOW TEST:8.192 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":788,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:26:47.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:26:47.189: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:26:48.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2541" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":51,"skipped":824,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:26:48.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:26:54.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2408" for this suite. 
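"Wrapper volumes" here are the API-backed volume types (secret, configMap, and friends) that the kubelet materializes through internal emptyDir plumbing; the regression this guards against, as far as the test name indicates, is two of them conflicting when mounted in the same pod. A minimal pod mixing both kinds (names assumed):

    kubectl create secret generic wrapper-secret --from-literal=k=v
    kubectl create configmap wrapper-cm --from-literal=k=v

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: wrapper-demo
    spec:
      restartPolicy: Never
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "ls /etc/secret /etc/cm"]
        volumeMounts:
        - name: s
          mountPath: /etc/secret
        - name: c
          mountPath: /etc/cm
      volumes:
      - name: s
        secret: {secretName: wrapper-secret}
      - name: c
        configMap: {name: wrapper-cm}
    EOF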
• [SLOW TEST:6.325 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":52,"skipped":842,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:26:54.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:27:54.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6832" for this suite. • [SLOW TEST:60.233 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":856,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:27:54.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:27:55.132: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"35d11a65-9b3b-4378-949c-fdc1e7fc32d0", Controller:(*bool)(0xc001aa5eba), BlockOwnerDeletion:(*bool)(0xc001aa5ebb)}} Dec 28 21:27:55.145: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"dadff428-c408-4dc6-ae2c-08cc995ed451", 
Controller:(*bool)(0xc003404a92), BlockOwnerDeletion:(*bool)(0xc003404a93)}} Dec 28 21:27:55.154: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a5ac92b4-d0ac-4fd2-a173-f286e764c13d", Controller:(*bool)(0xc004db2e42), BlockOwnerDeletion:(*bool)(0xc004db2e43)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:28:00.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2152" for this suite. • [SLOW TEST:5.413 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":54,"skipped":858,"failed":0} SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:28:00.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-568 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-568 to expose endpoints map[] Dec 28 21:28:00.654: INFO: Get endpoints failed (156.004272ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Dec 28 21:28:01.663: INFO: successfully validated that service endpoint-test2 in namespace services-568 exposes endpoints map[] (1.164500217s elapsed) STEP: Creating pod pod1 in namespace services-568 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-568 to expose endpoints map[pod1:[80]] Dec 28 21:28:05.838: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.160469537s elapsed, will retry) Dec 28 21:28:09.930: INFO: successfully validated that service endpoint-test2 in namespace services-568 exposes endpoints map[pod1:[80]] (8.252234061s elapsed) STEP: Creating pod pod2 in namespace services-568 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-568 to expose endpoints map[pod1:[80] pod2:[80]] Dec 28 21:28:14.826: INFO: Unexpected endpoints: found map[07404054-4b2a-4daa-81d1-5df7b047dec1:[80]], expected map[pod1:[80] pod2:[80]] (4.885981648s elapsed, will retry) Dec 28 21:28:16.911: INFO: successfully validated that service endpoint-test2 in namespace services-568 exposes endpoints map[pod1:[80] pod2:[80]] (6.971619757s elapsed) STEP: Deleting pod pod1 in namespace services-568 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-568 to expose endpoints 
map[pod2:[80]] Dec 28 21:28:17.956: INFO: successfully validated that service endpoint-test2 in namespace services-568 exposes endpoints map[pod2:[80]] (1.040365551s elapsed) STEP: Deleting pod pod2 in namespace services-568 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-568 to expose endpoints map[] Dec 28 21:28:19.016: INFO: successfully validated that service endpoint-test2 in namespace services-568 exposes endpoints map[] (1.050738286s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:28:19.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-568" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:19.268 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":55,"skipped":861,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:28:19.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:28:20.097: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Dec 28 21:28:23.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7513 create -f -' Dec 28 21:28:25.996: INFO: stderr: "" Dec 28 21:28:25.996: INFO: stdout: "e2e-test-crd-publish-openapi-8894-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Dec 28 21:28:25.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7513 delete e2e-test-crd-publish-openapi-8894-crds test-cr' Dec 28 21:28:26.193: INFO: stderr: "" Dec 28 21:28:26.193: INFO: stdout: "e2e-test-crd-publish-openapi-8894-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Dec 28 21:28:26.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7513 apply -f -' Dec 28 21:28:26.625: INFO: stderr: "" Dec 28 21:28:26.626: INFO: stdout: "e2e-test-crd-publish-openapi-8894-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Dec 28 21:28:26.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-7513 delete e2e-test-crd-publish-openapi-8894-crds test-cr' Dec 28 21:28:26.741: INFO: stderr: "" Dec 28 21:28:26.741: INFO: stdout: "e2e-test-crd-publish-openapi-8894-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Dec 28 21:28:26.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8894-crds' Dec 28 21:28:27.142: INFO: stderr: "" Dec 28 21:28:27.142: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8894-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:28:30.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7513" for this suite. • [SLOW TEST:10.535 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":56,"skipped":889,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:28:30.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:28:46.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5461" for this suite. • [SLOW TEST:16.585 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":57,"skipped":893,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:28:46.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-f037ab7a-a6f7-457c-8095-61044feaa481 STEP: Creating a pod to test consume configMaps Dec 28 21:28:46.959: INFO: Waiting up to 5m0s for pod "pod-configmaps-f49b13f8-0318-4847-8fa0-f18b14c0f745" in namespace "configmap-6166" to be "success or failure" Dec 28 21:28:46.977: INFO: Pod "pod-configmaps-f49b13f8-0318-4847-8fa0-f18b14c0f745": Phase="Pending", Reason="", readiness=false. Elapsed: 17.949ms Dec 28 21:28:48.985: INFO: Pod "pod-configmaps-f49b13f8-0318-4847-8fa0-f18b14c0f745": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025713084s Dec 28 21:28:50.991: INFO: Pod "pod-configmaps-f49b13f8-0318-4847-8fa0-f18b14c0f745": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032060057s Dec 28 21:28:53.000: INFO: Pod "pod-configmaps-f49b13f8-0318-4847-8fa0-f18b14c0f745": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040833675s Dec 28 21:28:55.016: INFO: Pod "pod-configmaps-f49b13f8-0318-4847-8fa0-f18b14c0f745": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.056525529s STEP: Saw pod success Dec 28 21:28:55.016: INFO: Pod "pod-configmaps-f49b13f8-0318-4847-8fa0-f18b14c0f745" satisfied condition "success or failure" Dec 28 21:28:55.022: INFO: Trying to get logs from node jerma-node pod pod-configmaps-f49b13f8-0318-4847-8fa0-f18b14c0f745 container configmap-volume-test: STEP: delete the pod Dec 28 21:28:55.225: INFO: Waiting for pod pod-configmaps-f49b13f8-0318-4847-8fa0-f18b14c0f745 to disappear Dec 28 21:28:55.247: INFO: Pod pod-configmaps-f49b13f8-0318-4847-8fa0-f18b14c0f745 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:28:55.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6166" for this suite. • [SLOW TEST:8.601 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":911,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:28:55.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Dec 28 21:28:55.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Dec 28 21:28:55.660: INFO: stderr: "" Dec 28 21:28:55.660: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:28:55.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9137" for this suite. 
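The api-versions check above shells out to kubectl, but it only needs the discovery endpoint. A rough client-go equivalent of kubectl api-versions, assuming the same kubeconfig path the suite uses and minimal error handling:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite uses; adjust the path as needed.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	groups, err := clientset.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	// kubectl api-versions prints one group/version per line; the legacy
	// core group appears as plain "v1", which is what the spec asserts.
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion)
		}
	}
}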
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":59,"skipped":923,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:28:55.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-pshc STEP: Creating a pod to test atomic-volume-subpath Dec 28 21:28:55.768: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pshc" in namespace "subpath-6965" to be "success or failure" Dec 28 21:28:55.771: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.505021ms Dec 28 21:28:57.783: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014960546s Dec 28 21:28:59.792: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024146386s Dec 28 21:29:01.807: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039631339s Dec 28 21:29:03.822: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Running", Reason="", readiness=true. Elapsed: 8.053777096s Dec 28 21:29:05.839: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Running", Reason="", readiness=true. Elapsed: 10.070881864s Dec 28 21:29:07.851: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Running", Reason="", readiness=true. Elapsed: 12.082961332s Dec 28 21:29:09.864: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Running", Reason="", readiness=true. Elapsed: 14.096561571s Dec 28 21:29:11.882: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Running", Reason="", readiness=true. Elapsed: 16.114327092s Dec 28 21:29:13.902: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Running", Reason="", readiness=true. Elapsed: 18.133831694s Dec 28 21:29:15.919: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Running", Reason="", readiness=true. Elapsed: 20.150727109s Dec 28 21:29:17.930: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Running", Reason="", readiness=true. Elapsed: 22.162361975s Dec 28 21:29:19.938: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Running", Reason="", readiness=true. Elapsed: 24.170398187s Dec 28 21:29:21.946: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Running", Reason="", readiness=true. Elapsed: 26.178103664s Dec 28 21:29:23.957: INFO: Pod "pod-subpath-test-projected-pshc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.189434397s STEP: Saw pod success Dec 28 21:29:23.957: INFO: Pod "pod-subpath-test-projected-pshc" satisfied condition "success or failure" Dec 28 21:29:23.962: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-pshc container test-container-subpath-projected-pshc: STEP: delete the pod Dec 28 21:29:24.019: INFO: Waiting for pod pod-subpath-test-projected-pshc to disappear Dec 28 21:29:24.084: INFO: Pod pod-subpath-test-projected-pshc no longer exists STEP: Deleting pod pod-subpath-test-projected-pshc Dec 28 21:29:24.085: INFO: Deleting pod "pod-subpath-test-projected-pshc" in namespace "subpath-6965" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:29:24.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6965" for this suite. • [SLOW TEST:28.425 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":60,"skipped":933,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:29:24.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:29:24.145: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:29:32.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4777" for this suite. 
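The websocket exec spec above submits a pod and then runs a command in it through the exec subresource. For comparison, a sketch of the same endpoint driven through client-go (pod name, namespace, and command are invented; the spec itself speaks the websocket protocol directly, while client-go's stock executor uses SPDY against the same URL, and newer client-go also ships remotecommand.NewWebSocketExecutor):

package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Target the exec subresource of a hypothetical pod.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("demo-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		panic(err)
	}
}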
• [SLOW TEST:8.435 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":954,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:29:32.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-1ce4ac50-db1c-4a07-8a35-71e76c4adee3 STEP: Creating a pod to test consume secrets Dec 28 21:29:32.631: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d2db0a74-9df1-4dfd-ab2e-bff5575c5646" in namespace "projected-3755" to be "success or failure" Dec 28 21:29:32.775: INFO: Pod "pod-projected-secrets-d2db0a74-9df1-4dfd-ab2e-bff5575c5646": Phase="Pending", Reason="", readiness=false. Elapsed: 143.650348ms Dec 28 21:29:34.791: INFO: Pod "pod-projected-secrets-d2db0a74-9df1-4dfd-ab2e-bff5575c5646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159542359s Dec 28 21:29:36.805: INFO: Pod "pod-projected-secrets-d2db0a74-9df1-4dfd-ab2e-bff5575c5646": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173443996s Dec 28 21:29:38.812: INFO: Pod "pod-projected-secrets-d2db0a74-9df1-4dfd-ab2e-bff5575c5646": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180167718s Dec 28 21:29:40.824: INFO: Pod "pod-projected-secrets-d2db0a74-9df1-4dfd-ab2e-bff5575c5646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.192072064s STEP: Saw pod success Dec 28 21:29:40.824: INFO: Pod "pod-projected-secrets-d2db0a74-9df1-4dfd-ab2e-bff5575c5646" satisfied condition "success or failure" Dec 28 21:29:40.830: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-d2db0a74-9df1-4dfd-ab2e-bff5575c5646 container projected-secret-volume-test: STEP: delete the pod Dec 28 21:29:40.909: INFO: Waiting for pod pod-projected-secrets-d2db0a74-9df1-4dfd-ab2e-bff5575c5646 to disappear Dec 28 21:29:40.925: INFO: Pod pod-projected-secrets-d2db0a74-9df1-4dfd-ab2e-bff5575c5646 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:29:40.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3755" for this suite. 
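The projected-secret spec above mounts one secret key at a remapped path with an explicit per-item mode. A sketch of the volume definition in Go (secret name, key, path, and the 0400 mode are illustrative values, not read from the test):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected volume that maps one secret key to a new path with an
	// explicit item mode; the pod then asserts the file content and mode.
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}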
• [SLOW TEST:8.439 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":995,"failed":0} SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:29:40.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:29:41.035: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:29:49.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6779" for this suite. 
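The log-retrieval spec above reads the pods/log subresource over a websocket. A simpler sketch of the same subresource streamed through a recent client-go, where Request.Stream takes a context (pod name and namespace are invented):

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// The spec reads this endpoint over a websocket; plain client-go streams
	// the same pods/log subresource over HTTP.
	req := clientset.CoreV1().Pods("default").GetLogs("demo-pod", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream)
}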
• [SLOW TEST:8.215 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":997,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:29:49.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Dec 28 21:29:49.287: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:30:02.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6505" for this suite. • [SLOW TEST:13.488 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":64,"skipped":1005,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:30:02.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1228 21:30:13.035267 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 28 21:30:13.035: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:30:13.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7191" for this suite. • [SLOW TEST:10.364 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":65,"skipped":1017,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:30:13.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1228 21:30:43.742148 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 28 21:30:43.742: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:30:43.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1461" for this suite. • [SLOW TEST:30.713 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":66,"skipped":1031,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:30:43.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 28 21:30:53.225: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:30:53.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1951" for this suite. 
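The terminated-container spec above runs as a non-root user, writes its termination message to a non-default path, and expects the kubelet to surface it in the container status; the log's "Expected: &{DONE}" line shows the matched message. A sketch of the container spec (image, command, UID, and the exact path are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Non-root user plus a non-default TerminationMessagePath; the container
	// writes "DONE" there and the kubelet copies it into the terminated
	// container's status, which the spec then asserts.
	uid := int64(1000)
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox",
		Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
	}
	fmt.Println(c.TerminationMessagePath)
}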
• [SLOW TEST:9.534 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1039,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:30:53.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Dec 28 21:30:53.589: INFO: Waiting up to 5m0s for pod "downward-api-9f03b408-fdd7-492b-b019-4d6a9ea2cee5" in namespace "downward-api-3845" to be "success or failure" Dec 28 21:30:53.601: INFO: Pod "downward-api-9f03b408-fdd7-492b-b019-4d6a9ea2cee5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.046383ms Dec 28 21:30:55.611: INFO: Pod "downward-api-9f03b408-fdd7-492b-b019-4d6a9ea2cee5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021138795s Dec 28 21:30:57.621: INFO: Pod "downward-api-9f03b408-fdd7-492b-b019-4d6a9ea2cee5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031818499s Dec 28 21:30:59.637: INFO: Pod "downward-api-9f03b408-fdd7-492b-b019-4d6a9ea2cee5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047631697s Dec 28 21:31:01.663: INFO: Pod "downward-api-9f03b408-fdd7-492b-b019-4d6a9ea2cee5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073005075s STEP: Saw pod success Dec 28 21:31:01.663: INFO: Pod "downward-api-9f03b408-fdd7-492b-b019-4d6a9ea2cee5" satisfied condition "success or failure" Dec 28 21:31:01.666: INFO: Trying to get logs from node jerma-node pod downward-api-9f03b408-fdd7-492b-b019-4d6a9ea2cee5 container dapi-container: STEP: delete the pod Dec 28 21:31:01.835: INFO: Waiting for pod downward-api-9f03b408-fdd7-492b-b019-4d6a9ea2cee5 to disappear Dec 28 21:31:01.843: INFO: Pod downward-api-9f03b408-fdd7-492b-b019-4d6a9ea2cee5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:31:01.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3845" for this suite. 
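The downward-API spec above injects the node's IP into the container environment and checks it in the pod output. The env var wiring, which is the whole mechanism under test, looks like this (the env var name is the only invented detail):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Downward API via env var: the kubelet fills HOST_IP from status.hostIP
	// at container start; the spec then asserts the value in the pod's logs.
	env := corev1.EnvVar{
		Name: "HOST_IP",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{
				APIVersion: "v1",
				FieldPath:  "status.hostIP",
			},
		},
	}
	fmt.Printf("%s <- %s\n", env.Name, env.ValueFrom.FieldRef.FieldPath)
}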
• [SLOW TEST:8.560 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1047,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:31:01.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-fcf7 STEP: Creating a pod to test atomic-volume-subpath Dec 28 21:31:02.183: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fcf7" in namespace "subpath-4655" to be "success or failure" Dec 28 21:31:02.212: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.630262ms Dec 28 21:31:04.234: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049777144s Dec 28 21:31:06.243: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059014609s Dec 28 21:31:08.263: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079165681s Dec 28 21:31:10.275: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Running", Reason="", readiness=true. Elapsed: 8.091079213s Dec 28 21:31:12.285: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Running", Reason="", readiness=true. Elapsed: 10.101272528s Dec 28 21:31:14.293: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Running", Reason="", readiness=true. Elapsed: 12.109672602s Dec 28 21:31:16.301: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Running", Reason="", readiness=true. Elapsed: 14.117114311s Dec 28 21:31:18.309: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Running", Reason="", readiness=true. Elapsed: 16.125621874s Dec 28 21:31:20.322: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Running", Reason="", readiness=true. Elapsed: 18.138195049s Dec 28 21:31:22.329: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Running", Reason="", readiness=true. Elapsed: 20.145684919s Dec 28 21:31:24.338: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Running", Reason="", readiness=true. Elapsed: 22.154420664s Dec 28 21:31:26.345: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Running", Reason="", readiness=true. Elapsed: 24.161555567s Dec 28 21:31:28.360: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.176522183s Dec 28 21:31:30.373: INFO: Pod "pod-subpath-test-downwardapi-fcf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.189434343s STEP: Saw pod success Dec 28 21:31:30.373: INFO: Pod "pod-subpath-test-downwardapi-fcf7" satisfied condition "success or failure" Dec 28 21:31:30.379: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-fcf7 container test-container-subpath-downwardapi-fcf7: STEP: delete the pod Dec 28 21:31:30.451: INFO: Waiting for pod pod-subpath-test-downwardapi-fcf7 to disappear Dec 28 21:31:30.484: INFO: Pod pod-subpath-test-downwardapi-fcf7 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-fcf7 Dec 28 21:31:30.485: INFO: Deleting pod "pod-subpath-test-downwardapi-fcf7" in namespace "subpath-4655" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:31:30.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4655" for this suite. • [SLOW TEST:28.654 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":69,"skipped":1047,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:31:30.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:31:47.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-884" for this suite. • [SLOW TEST:16.509 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":70,"skipped":1099,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:31:47.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-b84079da-9458-45c2-91da-95361225a12a STEP: Creating a pod to test consume configMaps Dec 28 21:31:47.271: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-55b4f1e7-7967-4270-8123-9d9665570552" in namespace "projected-8431" to be "success or failure" Dec 28 21:31:47.328: INFO: Pod "pod-projected-configmaps-55b4f1e7-7967-4270-8123-9d9665570552": Phase="Pending", Reason="", readiness=false. Elapsed: 57.073117ms Dec 28 21:31:49.337: INFO: Pod "pod-projected-configmaps-55b4f1e7-7967-4270-8123-9d9665570552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066675218s Dec 28 21:31:51.350: INFO: Pod "pod-projected-configmaps-55b4f1e7-7967-4270-8123-9d9665570552": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.078900184s Dec 28 21:31:53.358: INFO: Pod "pod-projected-configmaps-55b4f1e7-7967-4270-8123-9d9665570552": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087484912s Dec 28 21:31:55.371: INFO: Pod "pod-projected-configmaps-55b4f1e7-7967-4270-8123-9d9665570552": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100612101s STEP: Saw pod success Dec 28 21:31:55.372: INFO: Pod "pod-projected-configmaps-55b4f1e7-7967-4270-8123-9d9665570552" satisfied condition "success or failure" Dec 28 21:31:55.377: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-55b4f1e7-7967-4270-8123-9d9665570552 container projected-configmap-volume-test: STEP: delete the pod Dec 28 21:31:55.444: INFO: Waiting for pod pod-projected-configmaps-55b4f1e7-7967-4270-8123-9d9665570552 to disappear Dec 28 21:31:55.452: INFO: Pod pod-projected-configmaps-55b4f1e7-7967-4270-8123-9d9665570552 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:31:55.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8431" for this suite. • [SLOW TEST:8.503 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:31:55.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Dec 28 21:32:04.365: INFO: Successfully updated pod "annotationupdate5a8c7985-ae06-44b0-a26b-4e7f581db3ec" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:32:06.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8060" for this suite. 
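The projected downward-API spec above mounts pod annotations as a file and verifies the kubelet rewrites that file after the pod's annotations change (the "Successfully updated pod" line). A sketch of the projected volume involved (volume and file names are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Projected downward API file backed by metadata.annotations. The kubelet
	// refreshes the mounted file when the pod's annotations are updated,
	// which is exactly what the spec exercises.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}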
• [SLOW TEST:10.895 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:32:06.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Dec 28 21:32:22.808: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 28 21:32:22.828: INFO: Pod pod-with-prestop-http-hook still exists Dec 28 21:32:24.829: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 28 21:32:24.947: INFO: Pod pod-with-prestop-http-hook still exists Dec 28 21:32:26.829: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 28 21:32:26.836: INFO: Pod pod-with-prestop-http-hook still exists Dec 28 21:32:28.829: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 28 21:32:28.840: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:32:28.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-595" for this suite. 
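The preStop-hook spec above registers an HTTPGet preStop handler pointed at the handler pod created in [BeforeEach]; deleting the pod fires the hook, and the "check prestop hook" step verifies the handler received the request. A sketch of the lifecycle stanza (host, port, and path are invented; note the API-version caveat in the comment):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// On pod deletion the kubelet fires this HTTP request before stopping
	// the container. In the v1.17-era API this field is *corev1.Handler;
	// client-go >= v0.23 renames the type to LifecycleHandler.
	lifecycle := &corev1.Lifecycle{
		PreStop: &corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/echo?msg=prestop",
				Port: intstr.FromInt(8080),
				Host: "10.96.0.10",
			},
		},
	}
	fmt.Println(lifecycle.PreStop.HTTPGet.Path)
}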
• [SLOW TEST:22.452 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:32:28.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5861 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Dec 28 21:32:29.068: INFO: Found 0 stateful pods, waiting for 3 Dec 28 21:32:39.075: INFO: Found 2 stateful pods, waiting for 3 Dec 28 21:32:49.109: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:32:49.109: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:32:49.109: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Dec 28 21:32:59.079: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:32:59.079: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:32:59.079: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:32:59.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5861 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 28 21:32:59.494: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 28 21:32:59.494: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 28 21:32:59.494: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Dec 28 21:33:09.540: INFO: Updating stateful set ss2 STEP: Creating a new 
revision STEP: Updating Pods in reverse ordinal order Dec 28 21:33:19.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5861 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:33:20.084: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 28 21:33:20.084: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 28 21:33:20.084: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 28 21:33:30.116: INFO: Waiting for StatefulSet statefulset-5861/ss2 to complete update Dec 28 21:33:30.116: INFO: Waiting for Pod statefulset-5861/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Dec 28 21:33:30.116: INFO: Waiting for Pod statefulset-5861/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Dec 28 21:33:40.132: INFO: Waiting for StatefulSet statefulset-5861/ss2 to complete update Dec 28 21:33:40.132: INFO: Waiting for Pod statefulset-5861/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Dec 28 21:33:40.132: INFO: Waiting for Pod statefulset-5861/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Dec 28 21:33:50.132: INFO: Waiting for StatefulSet statefulset-5861/ss2 to complete update Dec 28 21:33:50.132: INFO: Waiting for Pod statefulset-5861/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Dec 28 21:34:00.130: INFO: Waiting for StatefulSet statefulset-5861/ss2 to complete update Dec 28 21:34:00.130: INFO: Waiting for Pod statefulset-5861/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Dec 28 21:34:10.128: INFO: Waiting for StatefulSet statefulset-5861/ss2 to complete update STEP: Rolling back to a previous revision Dec 28 21:34:20.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5861 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 28 21:34:20.818: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 28 21:34:20.818: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 28 21:34:20.818: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 28 21:34:30.882: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Dec 28 21:34:40.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5861 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:34:41.434: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 28 21:34:41.434: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 28 21:34:41.434: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 28 21:34:51.478: INFO: Waiting for StatefulSet statefulset-5861/ss2 to complete update Dec 28 21:34:51.478: INFO: Waiting for Pod statefulset-5861/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Dec 28 21:34:51.478: INFO: Waiting for Pod statefulset-5861/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Dec 28 21:35:01.492: INFO: Waiting for StatefulSet statefulset-5861/ss2 to complete update Dec 28 
21:35:01.492: INFO: Waiting for Pod statefulset-5861/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Dec 28 21:35:01.492: INFO: Waiting for Pod statefulset-5861/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Dec 28 21:35:12.080: INFO: Waiting for StatefulSet statefulset-5861/ss2 to complete update Dec 28 21:35:12.080: INFO: Waiting for Pod statefulset-5861/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Dec 28 21:35:21.496: INFO: Waiting for StatefulSet statefulset-5861/ss2 to complete update Dec 28 21:35:21.496: INFO: Waiting for Pod statefulset-5861/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Dec 28 21:35:31.492: INFO: Waiting for StatefulSet statefulset-5861/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Dec 28 21:35:41.490: INFO: Deleting all statefulset in ns statefulset-5861 Dec 28 21:35:41.493: INFO: Scaling statefulset ss2 to 0 Dec 28 21:36:11.519: INFO: Waiting for statefulset status.replicas updated to 0 Dec 28 21:36:11.523: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:36:11.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5861" for this suite. • [SLOW TEST:222.705 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":74,"skipped":1266,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:36:11.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5120 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-5120 STEP: Waiting until all stateful set 
ss replicas will be running in namespace statefulset-5120 Dec 28 21:36:11.761: INFO: Found 0 stateful pods, waiting for 1 Dec 28 21:36:21.774: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Dec 28 21:36:21.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 28 21:36:22.383: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 28 21:36:22.383: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 28 21:36:22.383: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 28 21:36:22.423: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 28 21:36:22.424: INFO: Waiting for statefulset status.replicas updated to 0 Dec 28 21:36:22.527: INFO: POD NODE PHASE GRACE CONDITIONS Dec 28 21:36:22.528: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC }] Dec 28 21:36:22.528: INFO: Dec 28 21:36:22.528: INFO: StatefulSet ss has not reached scale 3, at 1 Dec 28 21:36:23.749: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.968688048s Dec 28 21:36:24.769: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.748043412s Dec 28 21:36:25.780: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.728090942s Dec 28 21:36:27.613: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.716738319s Dec 28 21:36:28.713: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.88314002s Dec 28 21:36:29.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.783206342s Dec 28 21:36:30.734: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.774917694s Dec 28 21:36:31.743: INFO: Verifying statefulset ss doesn't scale past 3 for another 763.203491ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5120 Dec 28 21:36:32.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:36:33.197: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 28 21:36:33.197: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 28 21:36:33.197: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 28 21:36:33.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:36:33.684: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" 
Dec 28 21:36:33.684: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 28 21:36:33.684: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 28 21:36:33.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:36:34.149: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Dec 28 21:36:34.149: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 28 21:36:34.149: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 28 21:36:34.160: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:36:34.160: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:36:34.160: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Dec 28 21:36:34.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 28 21:36:34.518: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 28 21:36:34.519: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 28 21:36:34.519: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 28 21:36:34.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 28 21:36:34.946: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 28 21:36:34.946: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 28 21:36:34.946: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 28 21:36:34.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 28 21:36:35.432: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 28 21:36:35.433: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 28 21:36:35.433: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 28 21:36:35.433: INFO: Waiting for statefulset status.replicas updated to 0 Dec 28 21:36:35.593: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Dec 28 21:36:45.609: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 28 21:36:45.610: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 28 21:36:45.610: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 28 21:36:45.681: INFO: POD NODE PHASE GRACE CONDITIONS Dec 28 21:36:45.681: INFO: ss-0 
jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC }] Dec 28 21:36:45.682: INFO: ss-1 jerma-server-4b75xjbddvit Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:45.682: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:45.682: INFO: Dec 28 21:36:45.682: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 28 21:36:47.305: INFO: POD NODE PHASE GRACE CONDITIONS Dec 28 21:36:47.306: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC }] Dec 28 21:36:47.306: INFO: ss-1 jerma-server-4b75xjbddvit Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:47.306: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:47.306: INFO: Dec 28 21:36:47.307: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 28 21:36:48.314: INFO: POD NODE PHASE GRACE CONDITIONS Dec 28 21:36:48.314: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC }] Dec 28 21:36:48.315: INFO: ss-1 jerma-server-4b75xjbddvit Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:48.315: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:48.315: INFO: Dec 28 21:36:48.315: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 28 21:36:49.324: INFO: POD NODE PHASE GRACE CONDITIONS Dec 28 21:36:49.324: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC }] Dec 28 21:36:49.324: INFO: ss-1 jerma-server-4b75xjbddvit Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:49.325: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:49.325: INFO: Dec 28 21:36:49.325: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 28 21:36:50.331: INFO: POD NODE PHASE GRACE CONDITIONS Dec 28 21:36:50.331: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC }] Dec 28 21:36:50.331: INFO: ss-1 jerma-server-4b75xjbddvit Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:50.331: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:50.331: INFO: Dec 28 21:36:50.331: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 28 21:36:51.342: INFO: POD NODE PHASE GRACE CONDITIONS Dec 28 21:36:51.342: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC }] Dec 28 21:36:51.343: INFO: ss-1 jerma-server-4b75xjbddvit Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:51.343: INFO: Dec 28 21:36:51.343: INFO: StatefulSet ss has not reached scale 0, at 2 Dec 28 21:36:52.350: INFO: POD NODE PHASE GRACE CONDITIONS Dec 28 21:36:52.350: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC }] Dec 28 21:36:52.350: INFO: ss-1 jerma-server-4b75xjbddvit Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:52.350: INFO: Dec 28 21:36:52.350: INFO: StatefulSet ss has not reached scale 0, at 2 Dec 28 
21:36:53.366: INFO: POD NODE PHASE GRACE CONDITIONS Dec 28 21:36:53.366: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC }] Dec 28 21:36:53.366: INFO: ss-1 jerma-server-4b75xjbddvit Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:53.367: INFO: Dec 28 21:36:53.367: INFO: StatefulSet ss has not reached scale 0, at 2 Dec 28 21:36:54.375: INFO: POD NODE PHASE GRACE CONDITIONS Dec 28 21:36:54.376: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC }] Dec 28 21:36:54.376: INFO: ss-1 jerma-server-4b75xjbddvit Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:54.376: INFO: Dec 28 21:36:54.376: INFO: StatefulSet ss has not reached scale 0, at 2 Dec 28 21:36:55.386: INFO: POD NODE PHASE GRACE CONDITIONS Dec 28 21:36:55.386: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:11 +0000 UTC }] Dec 28 21:36:55.386: INFO: ss-1 jerma-server-4b75xjbddvit Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 21:36:22 +0000 UTC }] Dec 28 21:36:55.386: INFO: Dec 28 21:36:55.386: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful 
set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5120 Dec 28 21:36:56.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:36:56.738: INFO: rc: 1 Dec 28 21:36:56.738: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:37:06.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:37:06.907: INFO: rc: 1 Dec 28 21:37:06.907: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:37:16.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:37:17.082: INFO: rc: 1 Dec 28 21:37:17.082: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:37:27.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:37:27.269: INFO: rc: 1 Dec 28 21:37:27.269: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:37:37.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:37:37.437: INFO: rc: 1 Dec 28 21:37:37.437: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:37:47.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:37:47.577: INFO: rc: 1 Dec 28 21:37:47.577: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:37:57.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:37:57.791: INFO: rc: 1 Dec 28 21:37:57.791: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:38:07.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:38:08.028: INFO: rc: 1 Dec 28 21:38:08.028: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:38:18.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:38:18.177: INFO: rc: 1 Dec 28 21:38:18.177: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:38:28.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:38:30.203: INFO: rc: 1 Dec 28 21:38:30.204: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:38:40.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:38:40.389: INFO: rc: 1 Dec 28 21:38:40.389: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:38:50.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:38:50.617: INFO: rc: 1 Dec 28 21:38:50.618: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: 
exit status 1 Dec 28 21:39:00.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:39:00.895: INFO: rc: 1 Dec 28 21:39:00.896: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:39:10.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:39:11.018: INFO: rc: 1 Dec 28 21:39:11.018: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:39:21.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:39:21.217: INFO: rc: 1 Dec 28 21:39:21.217: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:39:31.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:39:31.398: INFO: rc: 1 Dec 28 21:39:31.398: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:39:41.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:39:41.572: INFO: rc: 1 Dec 28 21:39:41.572: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:39:51.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:39:51.760: INFO: rc: 1 Dec 28 21:39:51.760: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:40:01.761: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:40:01.983: INFO: rc: 1 Dec 28 21:40:01.984: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:40:11.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:40:12.251: INFO: rc: 1 Dec 28 21:40:12.252: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:40:22.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:40:22.455: INFO: rc: 1 Dec 28 21:40:22.456: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:40:32.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:40:32.605: INFO: rc: 1 Dec 28 21:40:32.605: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:40:42.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:40:42.753: INFO: rc: 1 Dec 28 21:40:42.753: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:40:52.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:40:52.955: INFO: rc: 1 Dec 28 21:40:52.956: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:41:02.956: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:41:03.109: INFO: rc: 1 Dec 28 21:41:03.109: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:41:13.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:41:13.342: INFO: rc: 1 Dec 28 21:41:13.342: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:41:23.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:41:23.596: INFO: rc: 1 Dec 28 21:41:23.596: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:41:33.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:41:33.759: INFO: rc: 1 Dec 28 21:41:33.760: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:41:43.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:41:43.938: INFO: rc: 1 Dec 28 21:41:43.938: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:41:53.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:41:54.132: INFO: rc: 1 Dec 28 21:41:54.133: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 28 21:42:04.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5120 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:42:04.281: INFO: rc: 1 Dec 28 21:42:04.281: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Dec 28 21:42:04.281: INFO: Scaling statefulset ss to 0 Dec 28 21:42:04.297: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Dec 28 21:42:04.301: INFO: Deleting all statefulset in ns statefulset-5120 Dec 28 21:42:04.307: INFO: Scaling statefulset ss to 0 Dec 28 21:42:04.337: INFO: Waiting for statefulset status.replicas updated to 0 Dec 28 21:42:04.342: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:42:04.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5120" for this suite. • [SLOW TEST:352.815 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":75,"skipped":1273,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:42:04.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 28 21:42:13.137: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f7d6c69f-ce76-49d3-a02e-f68e02bd08b6" Dec 28 21:42:13.138: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f7d6c69f-ce76-49d3-a02e-f68e02bd08b6" in namespace "pods-6492" to be "terminated due to deadline exceeded" Dec 28 21:42:13.149: INFO: Pod "pod-update-activedeadlineseconds-f7d6c69f-ce76-49d3-a02e-f68e02bd08b6": Phase="Running", Reason="", readiness=true. Elapsed: 10.947522ms Dec 28 21:42:15.159: INFO: Pod "pod-update-activedeadlineseconds-f7d6c69f-ce76-49d3-a02e-f68e02bd08b6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.021769088s Dec 28 21:42:15.160: INFO: Pod "pod-update-activedeadlineseconds-f7d6c69f-ce76-49d3-a02e-f68e02bd08b6" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:42:15.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6492" for this suite. • [SLOW TEST:10.780 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1283,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:42:15.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Dec 28 21:42:15.404: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-a 1f2394b2-c2cb-4772-adf7-9172bd0f2104 10429095 0 2019-12-28 21:42:15 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 28 21:42:15.404: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-a 1f2394b2-c2cb-4772-adf7-9172bd0f2104 10429095 0 2019-12-28 21:42:15 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Dec 28 21:42:25.430: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-a 1f2394b2-c2cb-4772-adf7-9172bd0f2104 10429122 0 2019-12-28 21:42:15 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 28 21:42:25.431: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-a 1f2394b2-c2cb-4772-adf7-9172bd0f2104 10429122 0 2019-12-28 21:42:15 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Dec 28 21:42:35.483: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-a 1f2394b2-c2cb-4772-adf7-9172bd0f2104 10429136 0 2019-12-28 21:42:15 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 28 21:42:35.483: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-a 1f2394b2-c2cb-4772-adf7-9172bd0f2104 10429136 0 2019-12-28 21:42:15 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Dec 28 21:42:45.494: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-a 1f2394b2-c2cb-4772-adf7-9172bd0f2104 10429154 0 2019-12-28 21:42:15 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 28 21:42:45.495: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-a 1f2394b2-c2cb-4772-adf7-9172bd0f2104 10429154 0 2019-12-28 21:42:15 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Dec 28 21:42:55.509: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-b 347f90e2-f4fb-4f75-af22-97b888fe5e2f 10429167 0 2019-12-28 21:42:55 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 28 21:42:55.509: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-b 347f90e2-f4fb-4f75-af22-97b888fe5e2f 10429167 0 2019-12-28 21:42:55 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Dec 28 21:43:05.525: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-b 347f90e2-f4fb-4f75-af22-97b888fe5e2f 10429181 0 2019-12-28 21:42:55 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 28 21:43:05.525: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9173 /api/v1/namespaces/watch-9173/configmaps/e2e-watch-test-configmap-b 347f90e2-f4fb-4f75-af22-97b888fe5e2f 10429181 0 2019-12-28 21:42:55 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:43:15.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9173" for this suite. • [SLOW TEST:60.371 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":77,"skipped":1290,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:43:15.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:43:15.694: INFO: Creating ReplicaSet my-hostname-basic-93a994de-1d9f-4325-a00b-3a53d9958893 Dec 28 21:43:15.713: INFO: Pod name my-hostname-basic-93a994de-1d9f-4325-a00b-3a53d9958893: Found 0 pods out of 1 Dec 28 21:43:20.732: INFO: Pod name my-hostname-basic-93a994de-1d9f-4325-a00b-3a53d9958893: Found 1 pods out of 1 Dec 28 21:43:20.732: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-93a994de-1d9f-4325-a00b-3a53d9958893" is running Dec 28 21:43:22.750: INFO: Pod "my-hostname-basic-93a994de-1d9f-4325-a00b-3a53d9958893-psgwc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 21:43:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 21:43:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-93a994de-1d9f-4325-a00b-3a53d9958893]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 21:43:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-93a994de-1d9f-4325-a00b-3a53d9958893]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 21:43:15 +0000 UTC Reason: Message:}]) Dec 28 21:43:22.750: INFO: Trying to dial the pod Dec 28 21:43:27.781: INFO: Controller my-hostname-basic-93a994de-1d9f-4325-a00b-3a53d9958893: Got expected result from replica 1 [my-hostname-basic-93a994de-1d9f-4325-a00b-3a53d9958893-psgwc]: "my-hostname-basic-93a994de-1d9f-4325-a00b-3a53d9958893-psgwc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:43:27.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5637" for this suite. 
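For context, the single-replica ReplicaSet this spec creates is conceptually similar to the minimal manifest below. This is an illustrative sketch, not the exact object the suite generates (the suite appends a generated UUID to the name); the agnhost image and its serve-hostname mode are assumptions based on the images referenced elsewhere in this run.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic            # the suite uses a UUID-suffixed name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed; this image appears elsewhere in this log
        args: ["serve-hostname"]                               # replies with the pod's hostname over HTTP

The spec passes once every replica answers a dial with its own pod name (its hostname), which is what the "Got expected result from replica 1" line above records.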
• [SLOW TEST:12.252 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":78,"skipped":1294,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:43:27.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Dec 28 21:43:28.048: INFO: PodSpec: initContainers in spec.initContainers Dec 28 21:44:29.887: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-be4a41bc-26a6-4278-a43f-223a71a1c821", GenerateName:"", Namespace:"init-container-4677", SelfLink:"/api/v1/namespaces/init-container-4677/pods/pod-init-be4a41bc-26a6-4278-a43f-223a71a1c821", UID:"d18d412a-d8a0-4abc-9fce-c7d0e7bfc914", ResourceVersion:"10429359", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713166208, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"48539571"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9r98z", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002a08000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9r98z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9r98z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9r98z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004f00078), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023af5c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004f00120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004f00140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004f00148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004f0014c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166208, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166208, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166208, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166208, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.170", PodIP:"10.44.0.2", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.2"}}, StartTime:(*v1.Time)(0xc0028680c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c3a070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c3a0e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://24bc273a27f80b9aa690f20c1bf6200fe025ed4fc606c8bd4676029ff341c6c0", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002868140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002868100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc004f001df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:44:29.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4677" for this suite. • [SLOW TEST:62.106 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":79,"skipped":1301,"failed":0} S ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:44:29.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Dec 28 21:44:30.066: INFO: Waiting up to 5m0s for pod "downward-api-6f7c5b80-8426-434f-89df-9ffaa7c577fb" in namespace "downward-api-5250" to be "success or failure" Dec 28 21:44:30.080: INFO: Pod "downward-api-6f7c5b80-8426-434f-89df-9ffaa7c577fb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.265981ms Dec 28 21:44:32.096: INFO: Pod "downward-api-6f7c5b80-8426-434f-89df-9ffaa7c577fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029485922s Dec 28 21:44:34.102: INFO: Pod "downward-api-6f7c5b80-8426-434f-89df-9ffaa7c577fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035865799s Dec 28 21:44:36.109: INFO: Pod "downward-api-6f7c5b80-8426-434f-89df-9ffaa7c577fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0430665s Dec 28 21:44:38.117: INFO: Pod "downward-api-6f7c5b80-8426-434f-89df-9ffaa7c577fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.051093107s STEP: Saw pod success Dec 28 21:44:38.117: INFO: Pod "downward-api-6f7c5b80-8426-434f-89df-9ffaa7c577fb" satisfied condition "success or failure" Dec 28 21:44:38.120: INFO: Trying to get logs from node jerma-node pod downward-api-6f7c5b80-8426-434f-89df-9ffaa7c577fb container dapi-container: STEP: delete the pod Dec 28 21:44:38.161: INFO: Waiting for pod downward-api-6f7c5b80-8426-434f-89df-9ffaa7c577fb to disappear Dec 28 21:44:38.176: INFO: Pod downward-api-6f7c5b80-8426-434f-89df-9ffaa7c577fb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:44:38.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5250" for this suite. • [SLOW TEST:8.275 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1302,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:44:38.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 28 21:44:38.335: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7ba0b75-abd4-4c6b-88fe-8e1f17cf90aa" in namespace "projected-1312" to be "success or failure" Dec 28 21:44:38.373: INFO: Pod "downwardapi-volume-c7ba0b75-abd4-4c6b-88fe-8e1f17cf90aa": Phase="Pending", Reason="", readiness=false. Elapsed: 37.444185ms Dec 28 21:44:40.385: INFO: Pod "downwardapi-volume-c7ba0b75-abd4-4c6b-88fe-8e1f17cf90aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049497904s Dec 28 21:44:42.400: INFO: Pod "downwardapi-volume-c7ba0b75-abd4-4c6b-88fe-8e1f17cf90aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064525637s Dec 28 21:44:44.407: INFO: Pod "downwardapi-volume-c7ba0b75-abd4-4c6b-88fe-8e1f17cf90aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071777929s Dec 28 21:44:46.416: INFO: Pod "downwardapi-volume-c7ba0b75-abd4-4c6b-88fe-8e1f17cf90aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.080625995s STEP: Saw pod success Dec 28 21:44:46.416: INFO: Pod "downwardapi-volume-c7ba0b75-abd4-4c6b-88fe-8e1f17cf90aa" satisfied condition "success or failure" Dec 28 21:44:46.421: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c7ba0b75-abd4-4c6b-88fe-8e1f17cf90aa container client-container: STEP: delete the pod Dec 28 21:44:46.464: INFO: Waiting for pod downwardapi-volume-c7ba0b75-abd4-4c6b-88fe-8e1f17cf90aa to disappear Dec 28 21:44:46.470: INFO: Pod downwardapi-volume-c7ba0b75-abd4-4c6b-88fe-8e1f17cf90aa no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:44:46.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1312" for this suite. • [SLOW TEST:8.334 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1313,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:44:46.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:44:46.719: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:44:48.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9267" for this suite. 
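Defaulting of this kind is declared in the CRD's structural schema. As a rough sketch (the group, kind, and field names here are the standard documentation placeholders, not the throwaway types the suite registers), a v1 CustomResourceDefinition with a default looks like:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1   # stamped onto requests and onto objects read back from storage

The "for requests and from storage" wording in the spec name covers exactly those two paths: the default is applied to incoming create/update requests, and it is also filled in when previously stored objects that lack the field are read back.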
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":82,"skipped":1314,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:44:48.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:44:48.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6219" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":83,"skipped":1318,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:44:48.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Dec 28 21:44:48.645: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3645 /api/v1/namespaces/watch-3645/configmaps/e2e-watch-test-label-changed c1d9ce54-615a-45d1-82e9-26e98d8974e5 10429466 0 2019-12-28 21:44:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 28 21:44:48.645: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3645 /api/v1/namespaces/watch-3645/configmaps/e2e-watch-test-label-changed c1d9ce54-615a-45d1-82e9-26e98d8974e5 10429467 0 2019-12-28 21:44:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 28 21:44:48.645: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3645 /api/v1/namespaces/watch-3645/configmaps/e2e-watch-test-label-changed c1d9ce54-615a-45d1-82e9-26e98d8974e5 10429468 0 2019-12-28 21:44:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Dec 28 21:44:58.742: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3645 /api/v1/namespaces/watch-3645/configmaps/e2e-watch-test-label-changed c1d9ce54-615a-45d1-82e9-26e98d8974e5 10429503 0 2019-12-28 21:44:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 28 21:44:58.744: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3645 /api/v1/namespaces/watch-3645/configmaps/e2e-watch-test-label-changed c1d9ce54-615a-45d1-82e9-26e98d8974e5 10429504 0 2019-12-28 21:44:48 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Dec 28 21:44:58.744: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3645 /api/v1/namespaces/watch-3645/configmaps/e2e-watch-test-label-changed c1d9ce54-615a-45d1-82e9-26e98d8974e5 10429505 0 2019-12-28 21:44:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:44:58.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3645" for this suite. • [SLOW TEST:10.381 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":84,"skipped":1338,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:44:58.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:44:59.010: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Dec 28 21:44:59.022: INFO: Number of nodes with available pods: 0 Dec 28 21:44:59.022: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Dec 28 21:44:59.092: INFO: Number of nodes with available pods: 0 Dec 28 21:44:59.092: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:00.104: INFO: Number of nodes with available pods: 0 Dec 28 21:45:00.104: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:01.102: INFO: Number of nodes with available pods: 0 Dec 28 21:45:01.103: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:02.102: INFO: Number of nodes with available pods: 0 Dec 28 21:45:02.103: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:03.100: INFO: Number of nodes with available pods: 0 Dec 28 21:45:03.101: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:04.102: INFO: Number of nodes with available pods: 0 Dec 28 21:45:04.102: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:05.100: INFO: Number of nodes with available pods: 0 Dec 28 21:45:05.101: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:06.104: INFO: Number of nodes with available pods: 1 Dec 28 21:45:06.104: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Dec 28 21:45:06.159: INFO: Number of nodes with available pods: 1 Dec 28 21:45:06.160: INFO: Number of running nodes: 0, number of available pods: 1 Dec 28 21:45:07.166: INFO: Number of nodes with available pods: 0 Dec 28 21:45:07.166: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Dec 28 21:45:07.199: INFO: Number of nodes with available pods: 0 Dec 28 21:45:07.199: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:08.207: INFO: Number of nodes with available pods: 0 Dec 28 21:45:08.208: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:09.207: INFO: Number of nodes with available pods: 0 Dec 28 21:45:09.207: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:10.211: INFO: Number of nodes with available pods: 0 Dec 28 21:45:10.211: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:11.212: INFO: Number of nodes with available pods: 0 Dec 28 21:45:11.212: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:12.205: INFO: Number of nodes with available pods: 0 Dec 28 21:45:12.205: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:13.207: INFO: Number of nodes with available pods: 0 Dec 28 21:45:13.207: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:14.209: INFO: Number of nodes with available pods: 0 Dec 28 21:45:14.209: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:15.207: INFO: Number of nodes with available pods: 0 Dec 28 21:45:15.207: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:16.207: INFO: Number of nodes with available pods: 0 Dec 28 21:45:16.208: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:17.206: INFO: Number of nodes with available pods: 0 Dec 28 21:45:17.206: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:18.261: INFO: Number of nodes with available pods: 0 Dec 28 21:45:18.261: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:19.213: INFO: Number of nodes with available pods: 0 Dec 28 21:45:19.213: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:20.210: INFO: Number 
of nodes with available pods: 0 Dec 28 21:45:20.210: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:21.207: INFO: Number of nodes with available pods: 0 Dec 28 21:45:21.207: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:22.205: INFO: Number of nodes with available pods: 0 Dec 28 21:45:22.205: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:45:23.244: INFO: Number of nodes with available pods: 1 Dec 28 21:45:23.244: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1982, will wait for the garbage collector to delete the pods Dec 28 21:45:23.317: INFO: Deleting DaemonSet.extensions daemon-set took: 10.353642ms Dec 28 21:45:23.618: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.75994ms Dec 28 21:45:36.843: INFO: Number of nodes with available pods: 0 Dec 28 21:45:36.843: INFO: Number of running nodes: 0, number of available pods: 0 Dec 28 21:45:36.853: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1982/daemonsets","resourceVersion":"10429610"},"items":null} Dec 28 21:45:36.865: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1982/pods","resourceVersion":"10429610"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:45:36.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1982" for this suite. • [SLOW TEST:38.229 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":85,"skipped":1344,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:45:37.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:45:45.159: INFO: Waiting up to 5m0s for pod "client-envvars-37f54d13-7ef5-487f-a981-bb7c84e9772e" in namespace "pods-8103" to be "success or failure" Dec 28 21:45:45.214: INFO: Pod "client-envvars-37f54d13-7ef5-487f-a981-bb7c84e9772e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.844333ms Dec 28 21:45:47.223: INFO: Pod "client-envvars-37f54d13-7ef5-487f-a981-bb7c84e9772e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06317939s Dec 28 21:45:49.237: INFO: Pod "client-envvars-37f54d13-7ef5-487f-a981-bb7c84e9772e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076834394s Dec 28 21:45:51.275: INFO: Pod "client-envvars-37f54d13-7ef5-487f-a981-bb7c84e9772e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114978566s STEP: Saw pod success Dec 28 21:45:51.275: INFO: Pod "client-envvars-37f54d13-7ef5-487f-a981-bb7c84e9772e" satisfied condition "success or failure" Dec 28 21:45:51.281: INFO: Trying to get logs from node jerma-node pod client-envvars-37f54d13-7ef5-487f-a981-bb7c84e9772e container env3cont: STEP: delete the pod Dec 28 21:45:51.357: INFO: Waiting for pod client-envvars-37f54d13-7ef5-487f-a981-bb7c84e9772e to disappear Dec 28 21:45:51.418: INFO: Pod client-envvars-37f54d13-7ef5-487f-a981-bb7c84e9772e no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:45:51.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8103" for this suite. • [SLOW TEST:14.435 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1350,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:45:51.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:45:51.635: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Dec 28 21:45:56.694: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 28 21:46:00.710: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Dec 28 21:46:00.743: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6956 /apis/apps/v1/namespaces/deployment-6956/deployments/test-cleanup-deployment 47f73b45-b807-47c1-a306-1c20dc93dc13 10429725 1 2019-12-28 21:46:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027ac8f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Dec 28 21:46:00.752: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:46:00.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6956" for this suite. • [SLOW TEST:9.481 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":87,"skipped":1360,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:46:00.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-1b025c45-c2bd-4497-b066-6ce87dd9f6b6 in namespace container-probe-6844 Dec 28 21:46:11.109: INFO: Started pod liveness-1b025c45-c2bd-4497-b066-6ce87dd9f6b6 in namespace container-probe-6844 STEP: checking the pod's current state and verifying that restartCount is present Dec 28 21:46:11.112: INFO: Initial restart count of pod liveness-1b025c45-c2bd-4497-b066-6ce87dd9f6b6 is 0 Dec 28 21:46:29.383: INFO: Restart count of pod container-probe-6844/liveness-1b025c45-c2bd-4497-b066-6ce87dd9f6b6 is now 1 (18.271279677s
elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:46:29.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6844" for this suite. • [SLOW TEST:28.551 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1373,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:46:29.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-1cf9cfc1-fac1-4056-9103-ccb62c395ce5 STEP: Creating a pod to test consume configMaps Dec 28 21:46:29.589: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c4587deb-144c-4535-9965-42c74a305f47" in namespace "projected-9515" to be "success or failure" Dec 28 21:46:30.153: INFO: Pod "pod-projected-configmaps-c4587deb-144c-4535-9965-42c74a305f47": Phase="Pending", Reason="", readiness=false. Elapsed: 564.294066ms Dec 28 21:46:32.165: INFO: Pod "pod-projected-configmaps-c4587deb-144c-4535-9965-42c74a305f47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576469353s Dec 28 21:46:34.180: INFO: Pod "pod-projected-configmaps-c4587deb-144c-4535-9965-42c74a305f47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.591325534s Dec 28 21:46:36.188: INFO: Pod "pod-projected-configmaps-c4587deb-144c-4535-9965-42c74a305f47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.599004242s Dec 28 21:46:38.197: INFO: Pod "pod-projected-configmaps-c4587deb-144c-4535-9965-42c74a305f47": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.607847233s STEP: Saw pod success Dec 28 21:46:38.197: INFO: Pod "pod-projected-configmaps-c4587deb-144c-4535-9965-42c74a305f47" satisfied condition "success or failure" Dec 28 21:46:38.203: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c4587deb-144c-4535-9965-42c74a305f47 container projected-configmap-volume-test: STEP: delete the pod Dec 28 21:46:38.279: INFO: Waiting for pod pod-projected-configmaps-c4587deb-144c-4535-9965-42c74a305f47 to disappear Dec 28 21:46:38.292: INFO: Pod pod-projected-configmaps-c4587deb-144c-4535-9965-42c74a305f47 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:46:38.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9515" for this suite. • [SLOW TEST:8.859 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1376,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:46:38.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:46:38.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Dec 28 21:46:38.640: INFO: stderr: "" Dec 28 21:46:38.641: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.1\", GitCommit:\"d647ddbd755faf07169599a625faf302ffc34458\", GitTreeState:\"clean\", BuildDate:\"2019-10-02T16:51:36Z\", GoVersion:\"go1.12.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:46:38.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4679" for this suite. 
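The stdout captured above contains both halves this spec asserts on: a client version line and a server version line. The same data is available in structured form; with the versions in this run, kubectl version with structured output (-o yaml in kubectl releases of this era; assumed available here) would print approximately the following:

clientVersion:
  major: "1"
  minor: "17"
  gitVersion: v1.17.0
  gitCommit: 70132b0f130acc0bed193d9ba59dd186f0e634cf
  gitTreeState: clean
  buildDate: "2019-12-22T16:10:40Z"
  goVersion: go1.13.5
  compiler: gc
  platform: linux/amd64
serverVersion:
  major: "1"
  minor: "16"
  gitVersion: v1.16.1
  gitCommit: d647ddbd755faf07169599a625faf302ffc34458
  gitTreeState: clean
  buildDate: "2019-10-02T16:51:36Z"
  goVersion: go1.12.10
  compiler: gc
  platform: linux/amd64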
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":90,"skipped":1383,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:46:38.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:46:38.861: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Dec 28 21:46:38.902: INFO: Number of nodes with available pods: 0 Dec 28 21:46:38.902: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:46:40.064: INFO: Number of nodes with available pods: 0 Dec 28 21:46:40.064: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:46:41.132: INFO: Number of nodes with available pods: 0 Dec 28 21:46:41.132: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:46:41.934: INFO: Number of nodes with available pods: 0 Dec 28 21:46:41.934: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:46:44.384: INFO: Number of nodes with available pods: 0 Dec 28 21:46:44.384: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:46:44.930: INFO: Number of nodes with available pods: 0 Dec 28 21:46:44.930: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:46:46.090: INFO: Number of nodes with available pods: 0 Dec 28 21:46:46.091: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:46:46.916: INFO: Number of nodes with available pods: 2 Dec 28 21:46:46.916: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Dec 28 21:46:46.944: INFO: Wrong image for pod: daemon-set-q5t9s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:46.944: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:47.992: INFO: Wrong image for pod: daemon-set-q5t9s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:47.992: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:49.377: INFO: Wrong image for pod: daemon-set-q5t9s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:49.377: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Dec 28 21:46:49.986: INFO: Wrong image for pod: daemon-set-q5t9s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:49.986: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:50.989: INFO: Wrong image for pod: daemon-set-q5t9s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:50.989: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:51.990: INFO: Wrong image for pod: daemon-set-q5t9s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:51.991: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:52.988: INFO: Wrong image for pod: daemon-set-q5t9s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:52.988: INFO: Pod daemon-set-q5t9s is not available Dec 28 21:46:52.988: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:53.992: INFO: Pod daemon-set-fjn58 is not available Dec 28 21:46:53.992: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:54.990: INFO: Pod daemon-set-fjn58 is not available Dec 28 21:46:54.990: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:55.993: INFO: Pod daemon-set-fjn58 is not available Dec 28 21:46:55.993: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:57.283: INFO: Pod daemon-set-fjn58 is not available Dec 28 21:46:57.283: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:57.988: INFO: Pod daemon-set-fjn58 is not available Dec 28 21:46:57.988: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:59.002: INFO: Pod daemon-set-fjn58 is not available Dec 28 21:46:59.003: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:46:59.994: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:47:00.999: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:47:01.990: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:47:02.989: INFO: Wrong image for pod: daemon-set-w8dxk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:47:03.996: INFO: Wrong image for pod: daemon-set-w8dxk. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 28 21:47:03.996: INFO: Pod daemon-set-w8dxk is not available Dec 28 21:47:04.986: INFO: Pod daemon-set-cxsfx is not available STEP: Check that daemon pods are still running on every node of the cluster. Dec 28 21:47:05.002: INFO: Number of nodes with available pods: 1 Dec 28 21:47:05.002: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:47:06.032: INFO: Number of nodes with available pods: 1 Dec 28 21:47:06.032: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:47:07.016: INFO: Number of nodes with available pods: 1 Dec 28 21:47:07.016: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:47:08.020: INFO: Number of nodes with available pods: 1 Dec 28 21:47:08.020: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:47:09.024: INFO: Number of nodes with available pods: 1 Dec 28 21:47:09.024: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:47:10.019: INFO: Number of nodes with available pods: 1 Dec 28 21:47:10.019: INFO: Node jerma-node is running more than one daemon pod Dec 28 21:47:11.021: INFO: Number of nodes with available pods: 2 Dec 28 21:47:11.022: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6944, will wait for the garbage collector to delete the pods Dec 28 21:47:11.120: INFO: Deleting DaemonSet.extensions daemon-set took: 23.679205ms Dec 28 21:47:11.421: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.076132ms Dec 28 21:47:26.829: INFO: Number of nodes with available pods: 0 Dec 28 21:47:26.829: INFO: Number of running nodes: 0, number of available pods: 0 Dec 28 21:47:26.835: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6944/daemonsets","resourceVersion":"10430025"},"items":null} Dec 28 21:47:26.838: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6944/pods","resourceVersion":"10430025"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:47:26.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6944" for this suite. 
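Annotation: the spec above creates a DaemonSet whose pods run docker.io/library/httpd:2.4.38-alpine, switches the pod-template image to gcr.io/kubernetes-e2e-test-images/agnhost:2.8, and polls until every node reports the new image (the repeated "Wrong image for pod" lines are that poll). A minimal client-go sketch of the same create-then-update flow is below. This is not the framework's own code: it assumes v0.17-era (context-free) client-go signatures, and the namespace, label key, and container name are illustrative; only the kubeconfig path and the two images are taken from this run.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"app": "daemon-set"} // illustrative label key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "default"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is the strategy under test: after the image
			// change below, the controller replaces pods node by node.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine", // initial image, as in the log
					}},
				},
			},
		},
	}

	ds, err = cs.AppsV1().DaemonSets("default").Create(ds)
	if err != nil {
		panic(err)
	}

	// The update that triggers the rolling replacement observed above.
	ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
	if _, err := cs.AppsV1().DaemonSets("default").Update(ds); err != nil {
		panic(err)
	}
}

After the Update call, `kubectl rollout status ds/daemon-set` would show the same node-by-node progression the poll above records. The timing summary below continues the suite output.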
• [SLOW TEST:48.200 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":91,"skipped":1390,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:47:26.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-7d5a00f7-00cd-4335-bd7d-10841756dfec STEP: Creating a pod to test consume configMaps Dec 28 21:47:26.972: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-241ff3a1-6e24-4e08-8b8a-d0f14c8c6788" in namespace "projected-3080" to be "success or failure" Dec 28 21:47:26.992: INFO: Pod "pod-projected-configmaps-241ff3a1-6e24-4e08-8b8a-d0f14c8c6788": Phase="Pending", Reason="", readiness=false. Elapsed: 19.79364ms Dec 28 21:47:28.999: INFO: Pod "pod-projected-configmaps-241ff3a1-6e24-4e08-8b8a-d0f14c8c6788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026227472s Dec 28 21:47:31.004: INFO: Pod "pod-projected-configmaps-241ff3a1-6e24-4e08-8b8a-d0f14c8c6788": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031630114s Dec 28 21:47:33.012: INFO: Pod "pod-projected-configmaps-241ff3a1-6e24-4e08-8b8a-d0f14c8c6788": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039383134s Dec 28 21:47:35.020: INFO: Pod "pod-projected-configmaps-241ff3a1-6e24-4e08-8b8a-d0f14c8c6788": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047332314s Dec 28 21:47:37.032: INFO: Pod "pod-projected-configmaps-241ff3a1-6e24-4e08-8b8a-d0f14c8c6788": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059359027s STEP: Saw pod success Dec 28 21:47:37.032: INFO: Pod "pod-projected-configmaps-241ff3a1-6e24-4e08-8b8a-d0f14c8c6788" satisfied condition "success or failure" Dec 28 21:47:37.037: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-241ff3a1-6e24-4e08-8b8a-d0f14c8c6788 container projected-configmap-volume-test: STEP: delete the pod Dec 28 21:47:37.087: INFO: Waiting for pod pod-projected-configmaps-241ff3a1-6e24-4e08-8b8a-d0f14c8c6788 to disappear Dec 28 21:47:37.096: INFO: Pod pod-projected-configmaps-241ff3a1-6e24-4e08-8b8a-d0f14c8c6788 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:47:37.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3080" for this suite. 
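Annotation: this spec mounts a ConfigMap through a projected volume and has a run-once test container read the file back; the pod reaching "Succeeded" is the pass signal the poll above waits for. A sketch of roughly what it builds, again assuming v0.17-era client-go; the ConfigMap name, data key, mount path, and the agnhost mounttest arguments are illustrative, not copied from this run.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The ConfigMap whose single key the pod reads back from the volume.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(cm); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once; Succeeded == pass
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					// Projected (rather than a plain configMap volume) is
					// the point of this spec.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				// Prints the mounted file so the test can check the pod logs.
				Args: []string{"mounttest", "--file_content=/etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}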
• [SLOW TEST:10.287 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1408,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:47:37.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 28 21:47:38.406: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Dec 28 21:47:40.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 21:47:42.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 21:47:44.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166458, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 28 21:47:47.510: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:47:47.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9501" for this suite. STEP: Destroying namespace "webhook-9501-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.704 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":93,"skipped":1428,"failed":0} SSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:47:47.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-194b76e4-0a23-46e4-ac6c-27ff26932103 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:47:47.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4829" for this suite. 
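Annotation: the empty-key rejection above is pure apiserver-side validation, not controller behavior: a ConfigMap data key must be a valid config key, and "" is not, so the create fails immediately. A minimal sketch, assuming v0.17-era client-go; the ConfigMap name and value are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value-1"}, // empty key: rejected by validation
	}
	_, err = cs.CoreV1().ConfigMaps("default").Create(cm)
	// Expected: a non-nil Invalid (validation) error, which is what the spec asserts.
	fmt.Println("create error:", err)
}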
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":94,"skipped":1435,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:47:47.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:47:48.181: INFO: Creating deployment "webserver-deployment" Dec 28 21:47:48.198: INFO: Waiting for observed generation 1 Dec 28 21:47:50.389: INFO: Waiting for all required pods to come up Dec 28 21:47:50.586: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Dec 28 21:48:18.168: INFO: Waiting for deployment "webserver-deployment" to complete Dec 28 21:48:18.179: INFO: Updating deployment "webserver-deployment" with a non-existent image Dec 28 21:48:18.188: INFO: Updating deployment webserver-deployment Dec 28 21:48:18.188: INFO: Waiting for observed generation 2 Dec 28 21:48:20.277: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Dec 28 21:48:20.290: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Dec 28 21:48:21.171: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Dec 28 21:48:21.595: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Dec 28 21:48:21.595: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Dec 28 21:48:21.600: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Dec 28 21:48:21.606: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Dec 28 21:48:21.606: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Dec 28 21:48:22.063: INFO: Updating deployment webserver-deployment Dec 28 21:48:22.063: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Dec 28 21:48:22.875: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Dec 28 21:48:25.578: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Dec 28 21:48:28.766: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3946 /apis/apps/v1/namespaces/deployment-3946/deployments/webserver-deployment 11738d2c-e08e-4072-92d5-8e79d3c1fc2b 10430497 3 2019-12-28 21:47:48 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00450c028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2019-12-28 21:48:22 +0000 UTC,LastTransitionTime:2019-12-28 21:48:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2019-12-28 21:48:28 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Dec 28 21:48:29.703: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-3946 /apis/apps/v1/namespaces/deployment-3946/replicasets/webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 10430495 3 2019-12-28 21:48:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 11738d2c-e08e-4072-92d5-8e79d3c1fc2b 0xc00450c557 0xc00450c558}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00450c5c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 28 21:48:29.703: INFO: All old ReplicaSets of Deployment "webserver-deployment": Dec 28 21:48:29.703: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-3946 
/apis/apps/v1/namespaces/deployment-3946/replicasets/webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 10430488 3 2019-12-28 21:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 11738d2c-e08e-4072-92d5-8e79d3c1fc2b 0xc00450c497 0xc00450c498}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00450c4f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Dec 28 21:48:31.329: INFO: Pod "webserver-deployment-595b5b9587-7pksq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7pksq webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-7pksq 09ab6d1b-6ec1-4c88-97db-fe648f8dd72b 10430474 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc00450d987 0xc00450d988}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.329: INFO: Pod "webserver-deployment-595b5b9587-bf9cb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bf9cb webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-bf9cb 41e44ae4-47a3-49d4-bfdf-e324371225e7 10430476 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc00450dac7 0xc00450dac8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.330: INFO: Pod "webserver-deployment-595b5b9587-ctvqr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ctvqr webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-ctvqr 2f3e628d-52fd-4499-818d-f23458d08293 10430478 0 
2019-12-28 21:48:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc00450dbf7 0xc00450dbf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-28 21:48:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.330: INFO: Pod "webserver-deployment-595b5b9587-dnshh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dnshh webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-dnshh 4c402917-c565-49d0-a3d0-ba6d83b4b9ff 10430450 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc00450dd57 0xc00450dd58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.k
ubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.331: INFO: Pod "webserver-deployment-595b5b9587-dsjw4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dsjw4 webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-dsjw4 84a06466-d095-4c9b-8aac-4c3734f3f14b 10430443 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc00450de97 0xc00450de98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kub
ernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.331: INFO: Pod "webserver-deployment-595b5b9587-f6j4q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f6j4q webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-f6j4q 20685277-54f6-41d4-9b09-5e1d61bc4547 10430475 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc00450dfa7 0xc00450dfa8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,Ini
tContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.332: INFO: Pod "webserver-deployment-595b5b9587-gmtnl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gmtnl webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-gmtnl 91aaf62d-760c-4b63-9f2b-152a14aff872 10430294 0 2019-12-28 21:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f000b7 0xc004f000b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePul
lSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.7,StartTime:2019-12-28 21:47:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-28 21:48:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://d92170ebb7327a48b45367c03750d51b9823f6981600c289b1db01bf1e4b1c3c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.332: INFO: Pod "webserver-deployment-595b5b9587-kfs2x" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kfs2x webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-kfs2x ea485e15-e6d2-4c79-80c2-b497d828f087 10430446 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f00247 0xc004f00248}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.333: INFO: Pod "webserver-deployment-595b5b9587-lc6w5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lc6w5 webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-lc6w5 a6a30fdb-a6b0-4aad-94e1-7e779bccab03 10430473 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f00377 0xc004f00378}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.333: INFO: Pod "webserver-deployment-595b5b9587-lwx6h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lwx6h webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-lwx6h b56b2f68-25a3-4154-9bc0-511e85f1ff4f 10430300 
0 2019-12-28 21:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f00497 0xc004f00498}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.5,StartTime:2019-12-28 21:47:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-28 21:48:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c8172405b07d26266cf62e3f4145f2ef5e491029a1c29eff3f5d13386d1fe725,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.334: INFO: Pod "webserver-deployment-595b5b9587-m4fz5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-m4fz5 webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-m4fz5 2832fa63-87fe-4f80-b512-0412a102b6a6 10430337 0 2019-12-28 21:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f00607 0xc004f00608}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.3,StartTime:2019-12-28 21:47:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-28 21:48:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://66257de6bfce17a37e45736acb11a880ca35ae2a9b4e25ecb882baa8c54b9810,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.334: INFO: Pod "webserver-deployment-595b5b9587-q5sw2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q5sw2 webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-q5sw2 cddcc499-64b0-45cc-85af-48f1338d376f 10430494 0 2019-12-28 21:48:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f00787 0xc004f00788}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-28 
21:48:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.335: INFO: Pod "webserver-deployment-595b5b9587-qm2f6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qm2f6 webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-qm2f6 9d9bdef5-4603-4b6a-8d7c-0c9f4d07e2f2 10430341 0 2019-12-28 21:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f008d7 0xc004f008d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.6,StartTime:2019-12-28 21:47:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-28 21:48:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://246d260890a14cb578f091101175120ebd2cb8c3b877fa0c8d0850035e669642,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.335: INFO: Pod "webserver-deployment-595b5b9587-rzqgc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rzqgc webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-rzqgc 04929df6-13d7-43da-b10a-4ed6cd8c3cf9 10430493 0 2019-12-28 21:48:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f00a57 0xc004f00a58}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-28 21:48:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.336: INFO: Pod "webserver-deployment-595b5b9587-t7wmv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t7wmv webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-t7wmv fe2e61cb-4ffd-4e6e-bc23-4587b43a7994 10430472 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f00bb7 0xc004f00bb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.337: INFO: Pod "webserver-deployment-595b5b9587-t8kcj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t8kcj webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-t8kcj 0eb17b56-4433-4994-aee6-ac6d67b77229 10430297 0 2019-12-28 21:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f00cc7 0xc004f00cc8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.4,StartTime:2019-12-28 21:47:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-28 21:48:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://cacb42800e98b4d51053ecd4ace6809ac2b6ecce7dabc737246f063b910db159,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.337: INFO: Pod "webserver-deployment-595b5b9587-tt2bj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tt2bj webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-tt2bj 99d49506-6f51-4dcd-8266-5acafe413c28 10430322 0 2019-12-28 21:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f00e37 0xc004f00e38}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.1,StartTime:2019-12-28 21:47:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-28 21:48:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6ad4eaf34eb39651a3140b6a99d310428050158badc1e24f83167e772fac068e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.338: INFO: Pod "webserver-deployment-595b5b9587-v489g" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v489g webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-v489g d9119b22-bdc1-4bd4-bed0-6d9c5ec0a6e4 10430303 0 2019-12-28 21:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f00fb7 0xc004f00fb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.6,StartTime:2019-12-28 21:47:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-28 21:48:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://451f8b5cba17610106d229266e6c879bc29108f6367b73d671a5b1b194607cd2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.338: INFO: Pod "webserver-deployment-595b5b9587-wsnbx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wsnbx webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-wsnbx 8e669f69-975e-4401-b571-ae74dac2afaa 10430449 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f01127 0xc004f01128}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.339: INFO: Pod "webserver-deployment-595b5b9587-xp546" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xp546 webserver-deployment-595b5b9587- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-595b5b9587-xp546 7ddda3d2-b6f5-41d1-b4e0-9c8e35244e7d 10430335 0 2019-12-28 21:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 03e96ece-d32b-4a55-a514-52280b0724b5 0xc004f01247 0xc004f01248}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.5,StartTime:2019-12-28 21:47:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-28 21:48:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://8202fb922187a401e213a6a333581d73e158f2966f20e1a8d7d0bc1faf31b82b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.339: INFO: Pod "webserver-deployment-c7997dcc8-262tr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-262tr webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-262tr 3bbcab9b-6dc3-436a-8466-b2301bb16488 10430444 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc004f013c7 0xc004f013c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.340: INFO: Pod "webserver-deployment-c7997dcc8-4gs9f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4gs9f webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-4gs9f 986d8be1-1d0e-40d9-be6e-3d10abd236fe 10430406 0 2019-12-28 21:48:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc004f01517 0xc004f01518}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-28 21:48:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.340: INFO: Pod "webserver-deployment-c7997dcc8-6ffqf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6ffqf webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-6ffqf bd2e50d2-9279-4bfe-a5fe-323921b28aaf 10430376 0 2019-12-28 21:48:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc004f01697 0xc004f01698}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-28 21:48:18 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.341: INFO: Pod "webserver-deployment-c7997dcc8-7hbhf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7hbhf webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-7hbhf e7c01690-2f75-4213-b9ed-3653a0f492de 10430479 0 2019-12-28 21:48:22 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc004f01807 0xc004f01808}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPoli
cy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-28 21:48:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.341: INFO: Pod "webserver-deployment-c7997dcc8-8n5bw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8n5bw webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-8n5bw 3c735ab7-9656-41f6-8b3c-ecae396d39ca 10430501 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc004f01977 0xc004f01978}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-28 21:48:26 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.341: INFO: Pod "webserver-deployment-c7997dcc8-btg4s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-btg4s webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-btg4s 80c392b2-6427-43b8-a170-77ae1affac63 10430404 0 2019-12-28 21:48:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc004f01af7 0xc004f01af8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPoli
cy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-28 21:48:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.342: INFO: Pod "webserver-deployment-c7997dcc8-dht6p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dht6p webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-dht6p 89add649-0291-41e3-a3eb-21e93300af2c 10430500 0 2019-12-28 21:48:22 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc004f01c67 0xc004f01c68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-28 21:48:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.342: INFO: Pod "webserver-deployment-c7997dcc8-dxkwj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dxkwj webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-dxkwj 3ff88e84-706f-434f-85d7-fed5323d25aa 10430461 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc004f01de7 0xc004f01de8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead
:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.343: INFO: Pod "webserver-deployment-c7997dcc8-f4b7f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f4b7f webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-f4b7f f58f9e97-16b7-4307-9ae6-3f4893b687ab 10430471 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc004f01f17 0xc004f01f18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{
},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.343: INFO: Pod "webserver-deployment-c7997dcc8-fqjrb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fqjrb webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-fqjrb ac72ec47-6208-477c-8672-e45acbbe5a84 10430380 0 2019-12-28 21:48:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc004890037 0xc004890038}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:ni
l,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-28 21:48:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.343: INFO: Pod "webserver-deployment-c7997dcc8-fvcjx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fvcjx webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-fvcjx 3874e67d-0486-426c-91ca-5be6a1be5fc8 10430448 0 2019-12-28 21:48:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc0048901b7 0xc0048901b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.343: INFO: Pod "webserver-deployment-c7997dcc8-gjmrw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gjmrw webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-gjmrw e23895ca-bb7f-4de9-8b1c-88d21231a6dc 10430390 0 2019-12-28 21:48:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc0048902e7 0xc0048902e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:18 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-28 21:48:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 28 21:48:31.344: INFO: Pod "webserver-deployment-c7997dcc8-vnqrj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vnqrj webserver-deployment-c7997dcc8- deployment-3946 /api/v1/namespaces/deployment-3946/pods/webserver-deployment-c7997dcc8-vnqrj 8cdb19ac-b523-44ee-a15d-ab8ab8b459ca 10430445 0 2019-12-28 21:48:22 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 87ef5947-bc3c-4146-b491-ae7b04f4bcfd 0xc0048905a7 0xc0048905a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamesp
ace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 21:48:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-28 21:48:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:48:31.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3946" for this suite. 
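The pod dumps above come from the proportional-scaling conformance test: the Deployment is scaled up while a rollout to the unpullable image webserver:404 is still in progress, so the controller has to split the extra replicas between the old and new ReplicaSets in proportion to their current sizes. A minimal Go sketch of that arithmetic follows; it is an illustration, not the controller's actual code, and the ReplicaSet names and replica counts are hypothetical.

package main

import "fmt"

// replicaSet carries only the fields the proportion calculation needs.
type replicaSet struct {
	name     string
	replicas int32
}

// proportionallyScale distributes the change in desired replicas across
// ReplicaSets in proportion to their current sizes, loosely mirroring what
// the deployment controller does when a scale request arrives mid-rollout.
// Assumes oldTotal > 0 and a non-empty slice.
func proportionallyScale(rss []replicaSet, oldTotal, newTotal int32) {
	delta := newTotal - oldTotal
	allocated := int32(0)
	for i := range rss {
		share := delta * rss[i].replicas / oldTotal // integer division rounds toward zero
		rss[i].replicas += share
		allocated += share
	}
	// Any rounding leftover goes to the first ReplicaSet in the slice.
	rss[0].replicas += delta - allocated
}

func main() {
	// Hypothetical mid-rollout state: the old ReplicaSet still holds most replicas.
	rss := []replicaSet{
		{name: "webserver-deployment-old", replicas: 8},
		{name: "webserver-deployment-c7997dcc8", replicas: 2},
	}
	proportionallyScale(rss, 10, 30) // deployment scaled 10 -> 30
	for _, rs := range rss {
		fmt.Printf("%s -> %d replicas\n", rs.name, rs.replicas)
	}
}

Scaling 10 -> 30 with an 8/2 split yields 24/6: both ReplicaSets grow and their 4:1 ratio is preserved, which is the property the test asserts.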
• [SLOW TEST:44.540 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":95,"skipped":1437,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:48:32.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5011 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 28 21:48:36.147: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 28 21:49:42.960: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5011 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 28 21:49:42.961: INFO: >>> kubeConfig: /root/.kube/config Dec 28 21:49:43.171: INFO: Found all expected endpoints: [netserver-0] Dec 28 21:49:43.199: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5011 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 28 21:49:43.199: INFO: >>> kubeConfig: /root/.kube/config Dec 28 21:49:43.376: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:49:43.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5011" for this suite. 
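The two ExecWithOptions entries above are the heart of this check: from a host-network test pod, curl is run against each netserver pod's /hostName endpoint on port 8080 and the returned hostname is matched against the expected endpoint list. A rough Go equivalent of one such probe, assuming the same port and path convention (the IP is taken from the log; error handling is simplified):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// hostNameOf asks a netserver-style pod for its hostname, mirroring
// `curl --max-time 15 http://<podIP>:8080/hostName` from the log above.
func hostNameOf(podIP string) (string, error) {
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	name, err := hostNameOf("10.44.0.1") // first endpoint probed in the log
	if err != nil {
		fmt.Println("endpoint unreachable:", err)
		return
	}
	fmt.Println("endpoint answered with hostname:", name)
}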
• [SLOW TEST:70.849 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1444,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:49:43.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:49:43.498: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-02ea7382-0670-483f-9769-bb8aae46dd09" in namespace "security-context-test-6617" to be "success or failure" Dec 28 21:49:43.531: INFO: Pod "alpine-nnp-false-02ea7382-0670-483f-9769-bb8aae46dd09": Phase="Pending", Reason="", readiness=false. Elapsed: 32.331937ms Dec 28 21:49:45.540: INFO: Pod "alpine-nnp-false-02ea7382-0670-483f-9769-bb8aae46dd09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041668418s Dec 28 21:49:47.547: INFO: Pod "alpine-nnp-false-02ea7382-0670-483f-9769-bb8aae46dd09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048351317s Dec 28 21:49:49.862: INFO: Pod "alpine-nnp-false-02ea7382-0670-483f-9769-bb8aae46dd09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.364038891s Dec 28 21:49:51.878: INFO: Pod "alpine-nnp-false-02ea7382-0670-483f-9769-bb8aae46dd09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.379803793s Dec 28 21:49:51.878: INFO: Pod "alpine-nnp-false-02ea7382-0670-483f-9769-bb8aae46dd09" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:49:51.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6617" for this suite. 
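The pod name in this test encodes its fixture: an alpine-based container started with allowPrivilegeEscalation set to false, after which the test asserts the process cannot gain privileges. A sketch of how such a pod spec is built with the k8s.io/api types (names and image tag are illustrative; assumes k8s.io/api and k8s.io/apimachinery are on the module path):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	noEscalation := false // the field is *bool, so it needs an addressable value
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "alpine:3.10", // hypothetical tag
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &noEscalation,
				},
			}},
		},
	}
	fmt.Printf("container security context: %+v\n", *pod.Spec.Containers[0].SecurityContext)
}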
• [SLOW TEST:9.410 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1458,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:49:52.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:50:04.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8053" for this suite. • [SLOW TEST:11.460 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":98,"skipped":1466,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:50:04.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-c08f3563-fa95-4efc-86a9-3496ac97c5cf STEP: Creating secret with name s-test-opt-upd-89d075b9-cc2a-4a96-9552-715a5113ec43 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c08f3563-fa95-4efc-86a9-3496ac97c5cf STEP: Updating secret s-test-opt-upd-89d075b9-cc2a-4a96-9552-715a5113ec43 STEP: Creating secret with name s-test-opt-create-69117ee9-2e84-45f8-bdf5-889c514da4c6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:51:33.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4297" for this suite. • [SLOW TEST:89.627 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1481,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:51:33.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:51:34.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8086" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":100,"skipped":1517,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:51:34.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:52:10.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7664" for this suite. STEP: Destroying namespace "nsdeletetest-2016" for this suite. Dec 28 21:52:10.566: INFO: Namespace nsdeletetest-2016 was already deleted STEP: Destroying namespace "nsdeletetest-9644" for this suite. • [SLOW TEST:36.499 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":101,"skipped":1531,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:52:10.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Dec 28 21:52:10.811: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 28 21:52:10.822: INFO: Waiting for terminating namespaces to be deleted... 
Dec 28 21:52:10.825: INFO: Logging pods the kubelet thinks are on node jerma-node before test Dec 28 21:52:10.834: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container status recorded) Dec 28 21:52:10.834: INFO: Container kube-proxy ready: true, restart count 0 Dec 28 21:52:10.834: INFO: weave-net-srfjj from kube-system started at 2019-12-17 21:23:16 +0000 UTC (2 container statuses recorded) Dec 28 21:52:10.834: INFO: Container weave ready: true, restart count 0 Dec 28 21:52:10.834: INFO: Container weave-npc ready: true, restart count 0 Dec 28 21:52:10.834: INFO: Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit before test Dec 28 21:52:10.867: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container status recorded) Dec 28 21:52:10.867: INFO: Container kube-proxy ready: true, restart count 0 Dec 28 21:52:10.867: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container status recorded) Dec 28 21:52:10.867: INFO: Container kube-scheduler ready: true, restart count 16 Dec 28 21:52:10.867: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container status recorded) Dec 28 21:52:10.867: INFO: Container coredns ready: true, restart count 0 Dec 28 21:52:10.867: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container status recorded) Dec 28 21:52:10.867: INFO: Container etcd ready: true, restart count 1 Dec 28 21:52:10.867: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container status recorded) Dec 28 21:52:10.867: INFO: Container kube-controller-manager ready: true, restart count 13 Dec 28 21:52:10.867: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded) Dec 28 21:52:10.867: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container status recorded) Dec 28 21:52:10.867: INFO: Container kube-apiserver ready: true, restart count 1 Dec 28 21:52:10.867: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded) Dec 28 21:52:10.867: INFO: Container weave ready: true, restart count 0 Dec 28 21:52:10.867: INFO: Container weave-npc ready: true, restart count 0 Dec 28 21:52:10.867: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded) Dec 28 21:52:10.867: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container status recorded) Dec 28 21:52:10.867: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e4a77e6b4d4b7c], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e4a77e7108711d], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
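The two FailedScheduling events above are the expected outcome: a pod whose nodeSelector matches no node label stays Pending while the scheduler keeps retrying and emitting events. A sketch of a pod that reproduces this state, again assuming client-go v0.18+; the pause image and the default namespace are illustrative, not the suite's actual choices:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so scheduling fails with
			// "0/2 nodes are available" and the pod stays Pending.
			NodeSelector: map[string]string{"label": "nonempty"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}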
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:52:11.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3832" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":102,"skipped":1547,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:52:11.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-d38a1086-f359-4619-8322-2277c803ecc7 STEP: Creating a pod to test consume configMaps Dec 28 21:52:12.019: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cb405acd-1dc1-4bd4-bd95-089e99e8169d" in namespace "projected-2792" to be "success or failure" Dec 28 21:52:12.044: INFO: Pod "pod-projected-configmaps-cb405acd-1dc1-4bd4-bd95-089e99e8169d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.772939ms Dec 28 21:52:14.055: INFO: Pod "pod-projected-configmaps-cb405acd-1dc1-4bd4-bd95-089e99e8169d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036256797s Dec 28 21:52:16.064: INFO: Pod "pod-projected-configmaps-cb405acd-1dc1-4bd4-bd95-089e99e8169d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045534581s Dec 28 21:52:18.071: INFO: Pod "pod-projected-configmaps-cb405acd-1dc1-4bd4-bd95-089e99e8169d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052258092s Dec 28 21:52:20.093: INFO: Pod "pod-projected-configmaps-cb405acd-1dc1-4bd4-bd95-089e99e8169d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074293592s STEP: Saw pod success Dec 28 21:52:20.093: INFO: Pod "pod-projected-configmaps-cb405acd-1dc1-4bd4-bd95-089e99e8169d" satisfied condition "success or failure" Dec 28 21:52:20.097: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-cb405acd-1dc1-4bd4-bd95-089e99e8169d container projected-configmap-volume-test: STEP: delete the pod Dec 28 21:52:20.139: INFO: Waiting for pod pod-projected-configmaps-cb405acd-1dc1-4bd4-bd95-089e99e8169d to disappear Dec 28 21:52:20.152: INFO: Pod pod-projected-configmaps-cb405acd-1dc1-4bd4-bd95-089e99e8169d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:52:20.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2792" for this suite. 
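The projected-configMap test above mounts a configMap through a projected volume, remaps the key to a nested path, and reads the file back while the pod runs as a non-root UID. A rough client-go sketch of such a pod, assuming client-go v0.18+; the names, image, paths, and UID are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			// Run the whole pod as a non-root UID; the projected file
			// must still be readable for the check to pass.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// Map the key to a different relative path inside the mount.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}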
• [SLOW TEST:8.271 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1559,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:52:20.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8995.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8995.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 28 21:52:32.863: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-8995/dns-test-28191594-b564-47a6-9203-da863a08d993: the server could not find the requested resource (get pods dns-test-28191594-b564-47a6-9203-da863a08d993) Dec 28 21:52:32.876: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-8995/dns-test-28191594-b564-47a6-9203-da863a08d993: the server could not find the requested resource (get pods dns-test-28191594-b564-47a6-9203-da863a08d993) Dec 28 21:52:32.889: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8995/dns-test-28191594-b564-47a6-9203-da863a08d993: the server could not find the requested resource (get pods dns-test-28191594-b564-47a6-9203-da863a08d993) Dec 28 21:52:32.904: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8995/dns-test-28191594-b564-47a6-9203-da863a08d993: the server could not find the requested resource (get pods dns-test-28191594-b564-47a6-9203-da863a08d993) Dec 28 21:52:32.911: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-8995/dns-test-28191594-b564-47a6-9203-da863a08d993: the server could not find the requested resource (get pods dns-test-28191594-b564-47a6-9203-da863a08d993) Dec 28 21:52:32.917: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-8995/dns-test-28191594-b564-47a6-9203-da863a08d993: the server could not find the requested resource (get pods dns-test-28191594-b564-47a6-9203-da863a08d993) Dec 28 21:52:32.921: INFO: Unable to read jessie_udp@PodARecord from pod dns-8995/dns-test-28191594-b564-47a6-9203-da863a08d993: the server could not find the requested resource (get pods dns-test-28191594-b564-47a6-9203-da863a08d993) Dec 28 21:52:32.925: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8995/dns-test-28191594-b564-47a6-9203-da863a08d993: the server could not find the requested resource (get pods dns-test-28191594-b564-47a6-9203-da863a08d993) Dec 28 21:52:32.925: INFO: Lookups using dns-8995/dns-test-28191594-b564-47a6-9203-da863a08d993 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] Dec 28 21:52:37.979: INFO: DNS probes using dns-8995/dns-test-28191594-b564-47a6-9203-da863a08d993 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:52:38.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8995" for this suite. 
• [SLOW TEST:17.869 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":104,"skipped":1578,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:52:38.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-ac64f25d-ba42-426b-a16d-e239b1586d06 STEP: Creating a pod to test consume secrets Dec 28 21:52:38.238: INFO: Waiting up to 5m0s for pod "pod-secrets-edb279d3-486c-4438-a63b-36fdc8dfa63a" in namespace "secrets-2367" to be "success or failure" Dec 28 21:52:38.243: INFO: Pod "pod-secrets-edb279d3-486c-4438-a63b-36fdc8dfa63a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.243071ms Dec 28 21:52:40.248: INFO: Pod "pod-secrets-edb279d3-486c-4438-a63b-36fdc8dfa63a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010684281s Dec 28 21:52:42.271: INFO: Pod "pod-secrets-edb279d3-486c-4438-a63b-36fdc8dfa63a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033099218s Dec 28 21:52:44.309: INFO: Pod "pod-secrets-edb279d3-486c-4438-a63b-36fdc8dfa63a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071239455s Dec 28 21:52:46.316: INFO: Pod "pod-secrets-edb279d3-486c-4438-a63b-36fdc8dfa63a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078718019s Dec 28 21:52:48.324: INFO: Pod "pod-secrets-edb279d3-486c-4438-a63b-36fdc8dfa63a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085983057s STEP: Saw pod success Dec 28 21:52:48.324: INFO: Pod "pod-secrets-edb279d3-486c-4438-a63b-36fdc8dfa63a" satisfied condition "success or failure" Dec 28 21:52:48.328: INFO: Trying to get logs from node jerma-node pod pod-secrets-edb279d3-486c-4438-a63b-36fdc8dfa63a container secret-volume-test: STEP: delete the pod Dec 28 21:52:48.404: INFO: Waiting for pod pod-secrets-edb279d3-486c-4438-a63b-36fdc8dfa63a to disappear Dec 28 21:52:48.412: INFO: Pod pod-secrets-edb279d3-486c-4438-a63b-36fdc8dfa63a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:52:48.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2367" for this suite. 
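The secrets-with-mappings test mounts a secret using an items list, so the key appears under a chosen relative path instead of its default file name. A small sketch of just that volume definition, printed as JSON for inspection; the secret name and path are illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A secret volume that maps the key "data-1" to a new relative path
	// inside the mount, instead of the default file name.
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map",
				Items:      []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
			},
		},
	}
	out, err := json.MarshalIndent(vol, "", "  ")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}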
• [SLOW TEST:10.351 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:52:48.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 28 21:52:49.880: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 28 21:52:52.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 21:52:54.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 21:52:56.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166769, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 28 21:52:59.126: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:53:11.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6954" for this suite. STEP: Destroying namespace "webhook-6954-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:23.356 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":106,"skipped":1613,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:53:11.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Dec 28 21:53:11.886: INFO: Waiting up to 5m0s for pod "pod-9f52eda8-30ab-444d-8aea-17d00198db06" in namespace "emptydir-5084" to be "success or failure" Dec 28 21:53:11.895: INFO: Pod "pod-9f52eda8-30ab-444d-8aea-17d00198db06": Phase="Pending", Reason="", readiness=false. Elapsed: 8.290402ms Dec 28 21:53:13.910: INFO: Pod "pod-9f52eda8-30ab-444d-8aea-17d00198db06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023326953s Dec 28 21:53:15.925: INFO: Pod "pod-9f52eda8-30ab-444d-8aea-17d00198db06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03844711s Dec 28 21:53:17.939: INFO: Pod "pod-9f52eda8-30ab-444d-8aea-17d00198db06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052302375s Dec 28 21:53:19.948: INFO: Pod "pod-9f52eda8-30ab-444d-8aea-17d00198db06": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06146401s Dec 28 21:53:21.957: INFO: Pod "pod-9f52eda8-30ab-444d-8aea-17d00198db06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070767958s STEP: Saw pod success Dec 28 21:53:21.957: INFO: Pod "pod-9f52eda8-30ab-444d-8aea-17d00198db06" satisfied condition "success or failure" Dec 28 21:53:21.962: INFO: Trying to get logs from node jerma-node pod pod-9f52eda8-30ab-444d-8aea-17d00198db06 container test-container: STEP: delete the pod Dec 28 21:53:22.031: INFO: Waiting for pod pod-9f52eda8-30ab-444d-8aea-17d00198db06 to disappear Dec 28 21:53:22.036: INFO: Pod pod-9f52eda8-30ab-444d-8aea-17d00198db06 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:53:22.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5084" for this suite. 
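Medium "Memory" backs an emptyDir with tmpfs, which is what the (root,0666,tmpfs) case exercises: create a file, set mode 0666, and verify both the mode and the filesystem type. A hedged sketch of an equivalent pod using busybox in place of the suite's test image, assuming client-go v0.18+:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with mode 0666, then report the mode and mount type.
				Command: []string{"sh", "-c",
					"touch /mnt/f && chmod 0666 /mnt/f && stat -c %a /mnt/f && mount | grep ' /mnt '"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}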
• [SLOW TEST:10.273 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1637,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:53:22.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 28 21:53:22.586: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 28 21:53:24.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 21:53:26.620: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 21:53:28.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166802, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 28 21:53:31.684: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:53:31.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5006" for this suite. STEP: Destroying namespace "webhook-5006-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.038 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":108,"skipped":1640,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:53:32.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Dec 28 21:53:32.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2629' Dec 28 21:53:35.938: INFO: stderr: "" Dec 28 21:53:35.938: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 28 21:53:35.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2629' Dec 28 21:53:36.143: INFO: stderr: "" Dec 28 21:53:36.143: INFO: stdout: "update-demo-nautilus-bdj7x update-demo-nautilus-cpt9z " Dec 28 21:53:36.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdj7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2629' Dec 28 21:53:36.289: INFO: stderr: "" Dec 28 21:53:36.289: INFO: stdout: "" Dec 28 21:53:36.289: INFO: update-demo-nautilus-bdj7x is created but not running Dec 28 21:53:41.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2629' Dec 28 21:53:42.157: INFO: stderr: "" Dec 28 21:53:42.157: INFO: stdout: "update-demo-nautilus-bdj7x update-demo-nautilus-cpt9z " Dec 28 21:53:42.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdj7x -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2629' Dec 28 21:53:42.647: INFO: stderr: "" Dec 28 21:53:42.647: INFO: stdout: "" Dec 28 21:53:42.647: INFO: update-demo-nautilus-bdj7x is created but not running Dec 28 21:53:47.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2629' Dec 28 21:53:47.862: INFO: stderr: "" Dec 28 21:53:47.862: INFO: stdout: "update-demo-nautilus-bdj7x update-demo-nautilus-cpt9z " Dec 28 21:53:47.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdj7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2629' Dec 28 21:53:48.061: INFO: stderr: "" Dec 28 21:53:48.061: INFO: stdout: "true" Dec 28 21:53:48.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdj7x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2629' Dec 28 21:53:48.143: INFO: stderr: "" Dec 28 21:53:48.144: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 28 21:53:48.144: INFO: validating pod update-demo-nautilus-bdj7x Dec 28 21:53:48.151: INFO: got data: { "image": "nautilus.jpg" } Dec 28 21:53:48.151: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 28 21:53:48.151: INFO: update-demo-nautilus-bdj7x is verified up and running Dec 28 21:53:48.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpt9z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2629' Dec 28 21:53:48.258: INFO: stderr: "" Dec 28 21:53:48.258: INFO: stdout: "true" Dec 28 21:53:48.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpt9z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2629' Dec 28 21:53:48.364: INFO: stderr: "" Dec 28 21:53:48.364: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 28 21:53:48.364: INFO: validating pod update-demo-nautilus-cpt9z Dec 28 21:53:48.398: INFO: got data: { "image": "nautilus.jpg" } Dec 28 21:53:48.399: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 28 21:53:48.399: INFO: update-demo-nautilus-cpt9z is verified up and running STEP: using delete to clean up resources Dec 28 21:53:48.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2629' Dec 28 21:53:48.586: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 28 21:53:48.586: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 28 21:53:48.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2629' Dec 28 21:53:48.769: INFO: stderr: "No resources found in kubectl-2629 namespace.\n" Dec 28 21:53:48.769: INFO: stdout: "" Dec 28 21:53:48.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2629 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 28 21:53:48.978: INFO: stderr: "" Dec 28 21:53:48.978: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:53:48.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2629" for this suite. • [SLOW TEST:16.979 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":109,"skipped":1642,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:53:49.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Dec 28 21:53:50.198: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix946808942/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:53:50.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1242" for this suite.
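kubectl proxy --unix-socket serves the proxied API over a unix domain socket instead of a TCP port, so "retrieving proxy /api/ output" means issuing an HTTP request whose connection is dialed over that socket. A self-contained Go sketch of that request against the socket path from this log (the URL host is a dummy; only the socket matters):

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	const socket = "/tmp/kubectl-proxy-unix946808942/test"
	// Every request is dialed over the unix socket regardless of the URL host.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // the APIVersions document served by the proxy
}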
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":110,"skipped":1643,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:53:50.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:53:58.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2527" for this suite. • [SLOW TEST:8.246 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1650,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:53:58.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 28 21:53:58.785: INFO: Waiting up to 5m0s for pod "pod-1933f8a5-544c-41ce-a074-a5a060d1becb" in namespace "emptydir-1017" to be "success or failure" Dec 28 21:53:58.805: INFO: Pod "pod-1933f8a5-544c-41ce-a074-a5a060d1becb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.0942ms Dec 28 21:54:00.815: INFO: Pod "pod-1933f8a5-544c-41ce-a074-a5a060d1becb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029510709s Dec 28 21:54:02.829: INFO: Pod "pod-1933f8a5-544c-41ce-a074-a5a060d1becb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043517384s Dec 28 21:54:04.838: INFO: Pod "pod-1933f8a5-544c-41ce-a074-a5a060d1becb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.052665158s Dec 28 21:54:06.846: INFO: Pod "pod-1933f8a5-544c-41ce-a074-a5a060d1becb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060440264s Dec 28 21:54:08.860: INFO: Pod "pod-1933f8a5-544c-41ce-a074-a5a060d1becb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074858086s STEP: Saw pod success Dec 28 21:54:08.860: INFO: Pod "pod-1933f8a5-544c-41ce-a074-a5a060d1becb" satisfied condition "success or failure" Dec 28 21:54:08.868: INFO: Trying to get logs from node jerma-node pod pod-1933f8a5-544c-41ce-a074-a5a060d1becb container test-container: STEP: delete the pod Dec 28 21:54:09.161: INFO: Waiting for pod pod-1933f8a5-544c-41ce-a074-a5a060d1becb to disappear Dec 28 21:54:09.167: INFO: Pod pod-1933f8a5-544c-41ce-a074-a5a060d1becb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:54:09.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1017" for this suite. • [SLOW TEST:10.507 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1709,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:54:09.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 28 21:54:09.888: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 28 21:54:11.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 21:54:13.914: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 21:54:15.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713166849, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 28 21:54:18.952: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:54:19.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3768" for this suite. STEP: Destroying namespace "webhook-3768-markers" for this suite. 
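The listing test tags its webhook configurations with a shared label, lists them with a label selector, and removes them with a single DeleteCollection call, after which the configMap that previously violated the webhook rules is admitted again. A sketch with client-go v0.18+; the label selector is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	api := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test=true"} // hypothetical label
	list, err := api.List(context.TODO(), sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d webhook configurations\n", len(list.Items))
	// Deleting the labelled collection removes every matching configuration
	// in one call, re-opening admission for the previously rejected objects.
	if err := api.DeleteCollection(context.TODO(), metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}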
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.678 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":113,"skipped":1729,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:54:19.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 28 21:54:19.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30ef409a-399e-4747-a8c9-f543b92cabf3" in namespace "downward-api-2372" to be "success or failure" Dec 28 21:54:19.999: INFO: Pod "downwardapi-volume-30ef409a-399e-4747-a8c9-f543b92cabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 53.85211ms Dec 28 21:54:22.010: INFO: Pod "downwardapi-volume-30ef409a-399e-4747-a8c9-f543b92cabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065625516s Dec 28 21:54:24.024: INFO: Pod "downwardapi-volume-30ef409a-399e-4747-a8c9-f543b92cabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079606401s Dec 28 21:54:26.034: INFO: Pod "downwardapi-volume-30ef409a-399e-4747-a8c9-f543b92cabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089632455s Dec 28 21:54:28.045: INFO: Pod "downwardapi-volume-30ef409a-399e-4747-a8c9-f543b92cabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100304347s Dec 28 21:54:30.055: INFO: Pod "downwardapi-volume-30ef409a-399e-4747-a8c9-f543b92cabf3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.109916242s STEP: Saw pod success Dec 28 21:54:30.055: INFO: Pod "downwardapi-volume-30ef409a-399e-4747-a8c9-f543b92cabf3" satisfied condition "success or failure" Dec 28 21:54:30.060: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-30ef409a-399e-4747-a8c9-f543b92cabf3 container client-container: STEP: delete the pod Dec 28 21:54:30.134: INFO: Waiting for pod downwardapi-volume-30ef409a-399e-4747-a8c9-f543b92cabf3 to disappear Dec 28 21:54:30.150: INFO: Pod downwardapi-volume-30ef409a-399e-4747-a8c9-f543b92cabf3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:54:30.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2372" for this suite. • [SLOW TEST:10.427 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1741,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:54:30.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7646 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7646;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7646 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7646;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7646.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7646.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7646.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7646.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7646.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7646.svc;check="$$(dig +tcp +noall +answer +search 
_http._tcp.dns-test-service.dns-7646.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7646.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7646.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7646.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7646.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7646.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7646.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 81.83.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.83.81_udp@PTR;check="$$(dig +tcp +noall +answer +search 81.83.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.83.81_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7646 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7646;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7646 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7646;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7646.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7646.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7646.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7646.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7646.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7646.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7646.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7646.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7646.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7646.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7646.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7646.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7646.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 81.83.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.83.81_udp@PTR;check="$$(dig +tcp +noall +answer +search 81.83.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.83.81_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 28 21:54:40.754: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.759: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.764: INFO: Unable to read wheezy_udp@dns-test-service.dns-7646 from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.772: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7646 from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.781: INFO: Unable to read wheezy_udp@dns-test-service.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.786: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.792: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.800: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.808: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.813: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.819: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.826: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.836: INFO: Unable to read 10.106.83.81_udp@PTR from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.848: INFO: Unable to read 
10.106.83.81_tcp@PTR from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.854: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.860: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.868: INFO: Unable to read jessie_udp@dns-test-service.dns-7646 from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.880: INFO: Unable to read jessie_tcp@dns-test-service.dns-7646 from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.896: INFO: Unable to read jessie_udp@dns-test-service.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.906: INFO: Unable to read jessie_tcp@dns-test-service.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.916: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.924: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.929: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.933: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7646.svc from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.936: INFO: Unable to read jessie_udp@PodARecord from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.940: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.944: INFO: Unable to read 10.106.83.81_udp@PTR from pod dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.947: INFO: Unable to read 10.106.83.81_tcp@PTR from pod 
dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084: the server could not find the requested resource (get pods dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084) Dec 28 21:54:40.947: INFO: Lookups using dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7646 wheezy_tcp@dns-test-service.dns-7646 wheezy_udp@dns-test-service.dns-7646.svc wheezy_tcp@dns-test-service.dns-7646.svc wheezy_udp@_http._tcp.dns-test-service.dns-7646.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7646.svc wheezy_udp@_http._tcp.test-service-2.dns-7646.svc wheezy_tcp@_http._tcp.test-service-2.dns-7646.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.106.83.81_udp@PTR 10.106.83.81_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7646 jessie_tcp@dns-test-service.dns-7646 jessie_udp@dns-test-service.dns-7646.svc jessie_tcp@dns-test-service.dns-7646.svc jessie_udp@_http._tcp.dns-test-service.dns-7646.svc jessie_tcp@_http._tcp.dns-test-service.dns-7646.svc jessie_udp@_http._tcp.test-service-2.dns-7646.svc jessie_tcp@_http._tcp.test-service-2.dns-7646.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.106.83.81_udp@PTR 10.106.83.81_tcp@PTR] Dec 28 21:54:46.381: INFO: DNS probes using dns-7646/dns-test-609e0180-1b87-4da0-b632-b6cd20d0d084 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:54:46.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7646" for this suite. • [SLOW TEST:16.612 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":115,"skipped":1771,"failed":0} SSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:54:46.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 21:55:19.068: INFO: Container started at 2019-12-28 21:54:55 +0000 UTC, pod became ready at 2019-12-28 21:55:18 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 
21:55:19.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-540" for this suite. • [SLOW TEST:32.175 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1775,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:55:19.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9228 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9228 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9228 Dec 28 21:55:19.245: INFO: Found 0 stateful pods, waiting for 1 Dec 28 21:55:29.282: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Dec 28 21:55:39.286: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 28 21:55:39.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 28 21:55:39.803: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 28 21:55:39.804: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 28 21:55:39.804: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 28 21:55:39.818: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 28 21:55:49.828: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 28 21:55:49.828: INFO: Waiting for statefulset status.replicas updated to 0 Dec 28 21:55:49.863: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999667s Dec 28 21:55:50.892: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 8.981008995s Dec 28 21:55:51.921: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.952084692s Dec 28 21:55:52.937: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.923211909s Dec 28 21:55:53.960: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.906479926s Dec 28 21:55:54.991: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.883565863s Dec 28 21:55:56.034: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.852891892s Dec 28 21:55:57.042: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.810410333s Dec 28 21:55:58.049: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.802163077s Dec 28 21:55:59.056: INFO: Verifying statefulset ss doesn't scale past 1 for another 795.494543ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9228 Dec 28 21:56:00.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:56:00.610: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 28 21:56:00.611: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 28 21:56:00.611: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 28 21:56:00.639: INFO: Found 2 stateful pods, waiting for 3 Dec 28 21:56:10.646: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:56:10.646: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:56:10.646: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 28 21:56:20.648: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:56:20.648: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 28 21:56:20.648: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 28 21:56:20.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 28 21:56:21.032: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 28 21:56:21.033: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 28 21:56:21.033: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 28 21:56:21.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 28 21:56:21.565: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 28 21:56:21.565: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 28 21:56:21.565: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 28 21:56:21.566: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 28 21:56:22.039: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 28 21:56:22.039: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 28 21:56:22.039: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 28 21:56:22.039: INFO: Waiting for statefulset status.replicas updated to 0 Dec 28 21:56:22.105: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Dec 28 21:56:32.117: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 28 21:56:32.117: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 28 21:56:32.117: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 28 21:56:32.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999878s Dec 28 21:56:33.157: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976221892s Dec 28 21:56:34.164: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.96950909s Dec 28 21:56:35.173: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.962558622s Dec 28 21:56:36.533: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.953683466s Dec 28 21:56:37.544: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.593031099s Dec 28 21:56:38.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.581674483s Dec 28 21:56:39.572: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.565767653s Dec 28 21:56:40.587: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.554627517s Dec 28 21:56:41.595: INFO: Verifying statefulset ss doesn't scale past 3 for another 539.272796ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9228 Dec 28 21:56:42.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:56:43.008: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 28 21:56:43.008: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 28 21:56:43.008: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 28 21:56:43.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:56:43.479: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 28 21:56:43.479: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 28 21:56:43.479: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 28 21:56:43.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 28 21:56:43.813: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 28 21:56:43.814: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 28 21:56:43.814: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 28 21:56:43.814: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Dec 28 21:57:13.865: INFO: Deleting all statefulset in ns statefulset-9228 Dec 28 21:57:13.875: INFO: Scaling statefulset ss to 0 Dec 28 21:57:13.903: INFO: Waiting for statefulset status.replicas updated to 0 Dec 28 21:57:13.910: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:57:13.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9228" for this suite. • [SLOW TEST:114.959 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":117,"skipped":1779,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:57:14.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-ca086f59-bf4c-48ea-9ae5-08f79ecc8e04 STEP: Creating a pod to test consume configMaps Dec 28 21:57:14.227: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a16e0f9-eb84-4822-bd4d-a0fdafdd807b" in namespace "configmap-2326" to be "success or failure" Dec 28 21:57:14.243: INFO: Pod "pod-configmaps-4a16e0f9-eb84-4822-bd4d-a0fdafdd807b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.301085ms Dec 28 21:57:16.250: INFO: Pod "pod-configmaps-4a16e0f9-eb84-4822-bd4d-a0fdafdd807b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022896958s Dec 28 21:57:18.260: INFO: Pod "pod-configmaps-4a16e0f9-eb84-4822-bd4d-a0fdafdd807b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03222939s Dec 28 21:57:20.271: INFO: Pod "pod-configmaps-4a16e0f9-eb84-4822-bd4d-a0fdafdd807b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.043161326s Dec 28 21:57:23.037: INFO: Pod "pod-configmaps-4a16e0f9-eb84-4822-bd4d-a0fdafdd807b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.809687218s STEP: Saw pod success Dec 28 21:57:23.038: INFO: Pod "pod-configmaps-4a16e0f9-eb84-4822-bd4d-a0fdafdd807b" satisfied condition "success or failure" Dec 28 21:57:23.044: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4a16e0f9-eb84-4822-bd4d-a0fdafdd807b container configmap-volume-test: STEP: delete the pod Dec 28 21:57:23.113: INFO: Waiting for pod pod-configmaps-4a16e0f9-eb84-4822-bd4d-a0fdafdd807b to disappear Dec 28 21:57:23.124: INFO: Pod pod-configmaps-4a16e0f9-eb84-4822-bd4d-a0fdafdd807b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:57:23.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2326" for this suite. • [SLOW TEST:9.163 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1780,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:57:23.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Dec 28 21:57:23.283: INFO: Waiting up to 5m0s for pod "client-containers-9a6ab7bd-2f61-44b1-aeb4-ce8dd580947d" in namespace "containers-5081" to be "success or failure" Dec 28 21:57:23.343: INFO: Pod "client-containers-9a6ab7bd-2f61-44b1-aeb4-ce8dd580947d": Phase="Pending", Reason="", readiness=false. Elapsed: 60.050744ms Dec 28 21:57:25.350: INFO: Pod "client-containers-9a6ab7bd-2f61-44b1-aeb4-ce8dd580947d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067193457s Dec 28 21:57:27.357: INFO: Pod "client-containers-9a6ab7bd-2f61-44b1-aeb4-ce8dd580947d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074059083s Dec 28 21:57:29.389: INFO: Pod "client-containers-9a6ab7bd-2f61-44b1-aeb4-ce8dd580947d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105470534s Dec 28 21:57:31.396: INFO: Pod "client-containers-9a6ab7bd-2f61-44b1-aeb4-ce8dd580947d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.113261376s STEP: Saw pod success Dec 28 21:57:31.397: INFO: Pod "client-containers-9a6ab7bd-2f61-44b1-aeb4-ce8dd580947d" satisfied condition "success or failure" Dec 28 21:57:31.401: INFO: Trying to get logs from node jerma-node pod client-containers-9a6ab7bd-2f61-44b1-aeb4-ce8dd580947d container test-container: STEP: delete the pod Dec 28 21:57:31.436: INFO: Waiting for pod client-containers-9a6ab7bd-2f61-44b1-aeb4-ce8dd580947d to disappear Dec 28 21:57:31.464: INFO: Pod client-containers-9a6ab7bd-2f61-44b1-aeb4-ce8dd580947d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:57:31.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5081" for this suite. • [SLOW TEST:8.278 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1792,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:57:31.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 28 21:57:31.711: INFO: Waiting up to 5m0s for pod "pod-e752b358-1b33-43f8-87c5-8fd2a90683ad" in namespace "emptydir-319" to be "success or failure" Dec 28 21:57:31.736: INFO: Pod "pod-e752b358-1b33-43f8-87c5-8fd2a90683ad": Phase="Pending", Reason="", readiness=false. Elapsed: 23.975421ms Dec 28 21:57:33.744: INFO: Pod "pod-e752b358-1b33-43f8-87c5-8fd2a90683ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032449878s Dec 28 21:57:35.752: INFO: Pod "pod-e752b358-1b33-43f8-87c5-8fd2a90683ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040155114s Dec 28 21:57:37.771: INFO: Pod "pod-e752b358-1b33-43f8-87c5-8fd2a90683ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059752572s Dec 28 21:57:39.797: INFO: Pod "pod-e752b358-1b33-43f8-87c5-8fd2a90683ad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085418602s Dec 28 21:57:41.814: INFO: Pod "pod-e752b358-1b33-43f8-87c5-8fd2a90683ad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.102725374s STEP: Saw pod success Dec 28 21:57:41.815: INFO: Pod "pod-e752b358-1b33-43f8-87c5-8fd2a90683ad" satisfied condition "success or failure" Dec 28 21:57:41.825: INFO: Trying to get logs from node jerma-node pod pod-e752b358-1b33-43f8-87c5-8fd2a90683ad container test-container: STEP: delete the pod Dec 28 21:57:41.941: INFO: Waiting for pod pod-e752b358-1b33-43f8-87c5-8fd2a90683ad to disappear Dec 28 21:57:41.998: INFO: Pod pod-e752b358-1b33-43f8-87c5-8fd2a90683ad no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:57:41.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-319" for this suite. • [SLOW TEST:10.539 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1797,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:57:42.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 28 21:57:42.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5de43e3c-1a90-4e1f-82c4-451a0eb66c0c" in namespace "downward-api-5800" to be "success or failure" Dec 28 21:57:42.178: INFO: Pod "downwardapi-volume-5de43e3c-1a90-4e1f-82c4-451a0eb66c0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.524025ms Dec 28 21:57:44.184: INFO: Pod "downwardapi-volume-5de43e3c-1a90-4e1f-82c4-451a0eb66c0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012478634s Dec 28 21:57:46.192: INFO: Pod "downwardapi-volume-5de43e3c-1a90-4e1f-82c4-451a0eb66c0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020158308s Dec 28 21:57:48.199: INFO: Pod "downwardapi-volume-5de43e3c-1a90-4e1f-82c4-451a0eb66c0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027723006s Dec 28 21:57:50.208: INFO: Pod "downwardapi-volume-5de43e3c-1a90-4e1f-82c4-451a0eb66c0c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.036141547s STEP: Saw pod success Dec 28 21:57:50.208: INFO: Pod "downwardapi-volume-5de43e3c-1a90-4e1f-82c4-451a0eb66c0c" satisfied condition "success or failure" Dec 28 21:57:50.213: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5de43e3c-1a90-4e1f-82c4-451a0eb66c0c container client-container: STEP: delete the pod Dec 28 21:57:50.244: INFO: Waiting for pod downwardapi-volume-5de43e3c-1a90-4e1f-82c4-451a0eb66c0c to disappear Dec 28 21:57:50.267: INFO: Pod downwardapi-volume-5de43e3c-1a90-4e1f-82c4-451a0eb66c0c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:57:50.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5800" for this suite. • [SLOW TEST:8.255 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1802,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:57:50.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-d423c85a-4272-4dc0-998d-ca224644fc88 STEP: Creating a pod to test consume secrets Dec 28 21:57:50.412: INFO: Waiting up to 5m0s for pod "pod-secrets-f8569733-9e34-4871-91ed-6f060f09c96d" in namespace "secrets-8298" to be "success or failure" Dec 28 21:57:50.433: INFO: Pod "pod-secrets-f8569733-9e34-4871-91ed-6f060f09c96d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.824446ms Dec 28 21:57:52.446: INFO: Pod "pod-secrets-f8569733-9e34-4871-91ed-6f060f09c96d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033844674s Dec 28 21:57:54.455: INFO: Pod "pod-secrets-f8569733-9e34-4871-91ed-6f060f09c96d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042871849s Dec 28 21:57:56.466: INFO: Pod "pod-secrets-f8569733-9e34-4871-91ed-6f060f09c96d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053757983s Dec 28 21:57:58.484: INFO: Pod "pod-secrets-f8569733-9e34-4871-91ed-6f060f09c96d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.071371848s STEP: Saw pod success Dec 28 21:57:58.484: INFO: Pod "pod-secrets-f8569733-9e34-4871-91ed-6f060f09c96d" satisfied condition "success or failure" Dec 28 21:57:58.491: INFO: Trying to get logs from node jerma-node pod pod-secrets-f8569733-9e34-4871-91ed-6f060f09c96d container secret-volume-test: STEP: delete the pod Dec 28 21:57:58.563: INFO: Waiting for pod pod-secrets-f8569733-9e34-4871-91ed-6f060f09c96d to disappear Dec 28 21:57:58.572: INFO: Pod pod-secrets-f8569733-9e34-4871-91ed-6f060f09c96d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:57:58.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8298" for this suite. • [SLOW TEST:8.327 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1812,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:57:58.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-3b2e59ba-58a3-427c-a613-a9d74da030cf STEP: Creating a pod to test consume secrets Dec 28 21:57:58.826: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e96a3c90-bc39-4572-adf9-bbd8e5e74ad3" in namespace "projected-6946" to be "success or failure" Dec 28 21:57:58.845: INFO: Pod "pod-projected-secrets-e96a3c90-bc39-4572-adf9-bbd8e5e74ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.678152ms Dec 28 21:58:00.861: INFO: Pod "pod-projected-secrets-e96a3c90-bc39-4572-adf9-bbd8e5e74ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035086153s Dec 28 21:58:02.898: INFO: Pod "pod-projected-secrets-e96a3c90-bc39-4572-adf9-bbd8e5e74ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072190907s Dec 28 21:58:04.910: INFO: Pod "pod-projected-secrets-e96a3c90-bc39-4572-adf9-bbd8e5e74ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084192752s Dec 28 21:58:06.921: INFO: Pod "pod-projected-secrets-e96a3c90-bc39-4572-adf9-bbd8e5e74ad3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.095256995s STEP: Saw pod success Dec 28 21:58:06.921: INFO: Pod "pod-projected-secrets-e96a3c90-bc39-4572-adf9-bbd8e5e74ad3" satisfied condition "success or failure" Dec 28 21:58:06.926: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-e96a3c90-bc39-4572-adf9-bbd8e5e74ad3 container secret-volume-test: STEP: delete the pod Dec 28 21:58:06.979: INFO: Waiting for pod pod-projected-secrets-e96a3c90-bc39-4572-adf9-bbd8e5e74ad3 to disappear Dec 28 21:58:06.985: INFO: Pod pod-projected-secrets-e96a3c90-bc39-4572-adf9-bbd8e5e74ad3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:58:06.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6946" for this suite. • [SLOW TEST:8.384 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1888,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:58:06.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:58:15.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8806" for this suite. 
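For reference, a minimal pod that exercises the same read-only root filesystem behavior checked above; the pod name and image are illustrative, not the suite's own fixtures:
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # Any write to the root filesystem should fail; the container can
    # still read and execute from the image.
    command: ["/bin/sh", "-c", "echo test > /file; sleep 60"]
    securityContext:
      readOnlyRootFilesystem: true
EOF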
• [SLOW TEST:8.187 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1897,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:58:15.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Dec 28 21:58:15.293: INFO: Waiting up to 5m0s for pod "pod-96f80cbd-13dd-428d-85e9-161557ac1f4c" in namespace "emptydir-7090" to be "success or failure" Dec 28 21:58:15.307: INFO: Pod "pod-96f80cbd-13dd-428d-85e9-161557ac1f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.534184ms Dec 28 21:58:17.316: INFO: Pod "pod-96f80cbd-13dd-428d-85e9-161557ac1f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023006699s Dec 28 21:58:19.321: INFO: Pod "pod-96f80cbd-13dd-428d-85e9-161557ac1f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028062513s Dec 28 21:58:21.337: INFO: Pod "pod-96f80cbd-13dd-428d-85e9-161557ac1f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044129747s Dec 28 21:58:23.347: INFO: Pod "pod-96f80cbd-13dd-428d-85e9-161557ac1f4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05428381s STEP: Saw pod success Dec 28 21:58:23.347: INFO: Pod "pod-96f80cbd-13dd-428d-85e9-161557ac1f4c" satisfied condition "success or failure" Dec 28 21:58:23.358: INFO: Trying to get logs from node jerma-node pod pod-96f80cbd-13dd-428d-85e9-161557ac1f4c container test-container: STEP: delete the pod Dec 28 21:58:23.413: INFO: Waiting for pod pod-96f80cbd-13dd-428d-85e9-161557ac1f4c to disappear Dec 28 21:58:23.423: INFO: Pod pod-96f80cbd-13dd-428d-85e9-161557ac1f4c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 21:58:23.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7090" for this suite. 
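A hand-rolled equivalent of the emptydir-on-tmpfs case above might look like the following; the UID, file name, and pod name are assumptions for illustration:
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001           # non-root, as in the (non-root,0666,tmpfs) case
  containers:
  - name: test-container
    image: busybox
    # Create a 0666 file on the mount and confirm the backing fs is tmpfs.
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -ln /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory          # tmpfs-backed emptyDir
EOF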
• [SLOW TEST:8.255 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1905,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 21:58:23.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-878a7ce2-b390-4319-a09a-059c83a24f26 in namespace container-probe-4089 Dec 28 21:58:31.640: INFO: Started pod liveness-878a7ce2-b390-4319-a09a-059c83a24f26 in namespace container-probe-4089 STEP: checking the pod's current state and verifying that restartCount is present Dec 28 21:58:31.645: INFO: Initial restart count of pod liveness-878a7ce2-b390-4319-a09a-059c83a24f26 is 0 Dec 28 21:58:47.721: INFO: Restart count of pod container-probe-4089/liveness-878a7ce2-b390-4319-a09a-059c83a24f26 is now 1 (16.076433807s elapsed) Dec 28 21:59:07.830: INFO: Restart count of pod container-probe-4089/liveness-878a7ce2-b390-4319-a09a-059c83a24f26 is now 2 (36.184994801s elapsed) Dec 28 21:59:27.952: INFO: Restart count of pod container-probe-4089/liveness-878a7ce2-b390-4319-a09a-059c83a24f26 is now 3 (56.307516307s elapsed) Dec 28 21:59:48.045: INFO: Restart count of pod container-probe-4089/liveness-878a7ce2-b390-4319-a09a-059c83a24f26 is now 4 (1m16.399922592s elapsed) Dec 28 22:00:48.456: INFO: Restart count of pod container-probe-4089/liveness-878a7ce2-b390-4319-a09a-059c83a24f26 is now 5 (2m16.811157593s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:00:48.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4089" for this suite. 
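The monotonically increasing restart count above comes from a pod whose liveness probe is designed to start failing; a minimal sketch in the spirit of that test (probe file, timings, and names are illustrative):
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo         # hypothetical name
spec:
  containers:
  - name: liveness
    image: busybox
    # Healthy while /tmp/healthy exists; after 10s the probe fails and the
    # kubelet restarts the container, so restartCount only ever increases.
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Watch restartCount climb, as the log above does through the framework:
kubectl get pod liveness-demo -w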
• [SLOW TEST:145.216 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":1911,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:00:48.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Dec 28 22:00:48.955: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5892 /api/v1/namespaces/watch-5892/configmaps/e2e-watch-test-resource-version 3cbfa868-8248-4ceb-8ca7-c683ee0dd990 10432929 0 2019-12-28 22:00:48 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 28 22:00:48.955: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5892 /api/v1/namespaces/watch-5892/configmaps/e2e-watch-test-resource-version 3cbfa868-8248-4ceb-8ca7-c683ee0dd990 10432930 0 2019-12-28 22:00:48 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:00:48.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5892" for this suite. 
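------------------------------
The two events above are printed as Go struct dumps; rendered as a manifest, the object the watcher observes is simply the ConfigMap after its second mutation (server-set fields such as uid and resourceVersion omitted). Starting the watch from the resource version returned by the first update is what guarantees the watcher sees exactly the later MODIFIED event and the DELETED event, and nothing earlier:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  namespace: watch-5892
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"                      # the second modification; the first is never replayed
------------------------------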
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":127,"skipped":1962,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:00:48.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Dec 28 22:00:49.131: INFO: namespace kubectl-3067 Dec 28 22:00:49.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3067' Dec 28 22:00:49.637: INFO: stderr: "" Dec 28 22:00:49.637: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Dec 28 22:00:50.650: INFO: Selector matched 1 pods for map[app:agnhost] Dec 28 22:00:50.650: INFO: Found 0 / 1 Dec 28 22:00:51.646: INFO: Selector matched 1 pods for map[app:agnhost] Dec 28 22:00:51.647: INFO: Found 0 / 1 Dec 28 22:00:52.675: INFO: Selector matched 1 pods for map[app:agnhost] Dec 28 22:00:52.676: INFO: Found 0 / 1 Dec 28 22:00:53.654: INFO: Selector matched 1 pods for map[app:agnhost] Dec 28 22:00:53.655: INFO: Found 0 / 1 Dec 28 22:00:54.659: INFO: Selector matched 1 pods for map[app:agnhost] Dec 28 22:00:54.659: INFO: Found 0 / 1 Dec 28 22:00:55.645: INFO: Selector matched 1 pods for map[app:agnhost] Dec 28 22:00:55.645: INFO: Found 0 / 1 Dec 28 22:00:56.643: INFO: Selector matched 1 pods for map[app:agnhost] Dec 28 22:00:56.643: INFO: Found 1 / 1 Dec 28 22:00:56.643: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 28 22:00:56.646: INFO: Selector matched 1 pods for map[app:agnhost] Dec 28 22:00:56.646: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Dec 28 22:00:56.646: INFO: wait on agnhost-master startup in kubectl-3067 Dec 28 22:00:56.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-nfsml agnhost-master --namespace=kubectl-3067' Dec 28 22:00:56.770: INFO: stderr: "" Dec 28 22:00:56.771: INFO: stdout: "Paused\n" STEP: exposing RC Dec 28 22:00:56.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3067' Dec 28 22:00:57.002: INFO: stderr: "" Dec 28 22:00:57.002: INFO: stdout: "service/rm2 exposed\n" Dec 28 22:00:57.032: INFO: Service rm2 in namespace kubectl-3067 found. STEP: exposing service Dec 28 22:00:59.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3067' Dec 28 22:00:59.313: INFO: stderr: "" Dec 28 22:00:59.313: INFO: stdout: "service/rm3 exposed\n" Dec 28 22:00:59.331: INFO: Service rm3 in namespace kubectl-3067 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:01:01.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3067" for this suite. • [SLOW TEST:12.445 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":128,"skipped":1965,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:01:01.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 22:01:01.545: INFO: Waiting up to 5m0s for pod "busybox-user-65534-2157110c-a530-40a2-9e20-71e19ed2b4be" in namespace "security-context-test-6625" to be "success or failure" Dec 28 22:01:01.569: INFO: Pod "busybox-user-65534-2157110c-a530-40a2-9e20-71e19ed2b4be": Phase="Pending", Reason="", readiness=false. Elapsed: 24.175557ms Dec 28 22:01:03.584: INFO: Pod "busybox-user-65534-2157110c-a530-40a2-9e20-71e19ed2b4be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039300709s Dec 28 22:01:05.591: INFO: Pod "busybox-user-65534-2157110c-a530-40a2-9e20-71e19ed2b4be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046538758s Dec 28 22:01:07.601: INFO: Pod "busybox-user-65534-2157110c-a530-40a2-9e20-71e19ed2b4be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056113603s Dec 28 22:01:09.675: INFO: Pod "busybox-user-65534-2157110c-a530-40a2-9e20-71e19ed2b4be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129866791s Dec 28 22:01:11.682: INFO: Pod "busybox-user-65534-2157110c-a530-40a2-9e20-71e19ed2b4be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.136703199s Dec 28 22:01:11.682: INFO: Pod "busybox-user-65534-2157110c-a530-40a2-9e20-71e19ed2b4be" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:01:11.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6625" for this suite. 
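------------------------------
The security-context spec above only needs the pod to run its container as UID 65534 (the conventional "nobody" user) and exit successfully. The essential shape of such a pod, with an illustrative busybox image and command:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "id -u"]   # prints 65534 if the UID was applied
    securityContext:
      runAsUser: 65534
------------------------------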
• [SLOW TEST:10.279 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":1983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:01:11.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 22:01:12.496: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:01:18.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5659" for this suite. 
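------------------------------
The CustomResourceDefinition specs in this run create and tear down throwaway CRDs through the apiextensions client. A minimal apiextensions.k8s.io/v1 definition of the kind being listed might look like this (group and names are illustrative; v1 requires a structural schema per version):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com             # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
------------------------------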
• [SLOW TEST:6.759 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":130,"skipped":2012,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:01:18.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 28 22:01:18.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bbdb829-ce6e-46fe-846e-9515322540df" in namespace "projected-9221" to be "success or failure" Dec 28 22:01:18.619: INFO: Pod "downwardapi-volume-4bbdb829-ce6e-46fe-846e-9515322540df": Phase="Pending", Reason="", readiness=false. Elapsed: 15.258571ms Dec 28 22:01:20.632: INFO: Pod "downwardapi-volume-4bbdb829-ce6e-46fe-846e-9515322540df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027990627s Dec 28 22:01:22.649: INFO: Pod "downwardapi-volume-4bbdb829-ce6e-46fe-846e-9515322540df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044638357s Dec 28 22:01:24.656: INFO: Pod "downwardapi-volume-4bbdb829-ce6e-46fe-846e-9515322540df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051375128s Dec 28 22:01:26.669: INFO: Pod "downwardapi-volume-4bbdb829-ce6e-46fe-846e-9515322540df": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.06487893s STEP: Saw pod success Dec 28 22:01:26.669: INFO: Pod "downwardapi-volume-4bbdb829-ce6e-46fe-846e-9515322540df" satisfied condition "success or failure" Dec 28 22:01:26.673: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4bbdb829-ce6e-46fe-846e-9515322540df container client-container: STEP: delete the pod Dec 28 22:01:26.723: INFO: Waiting for pod downwardapi-volume-4bbdb829-ce6e-46fe-846e-9515322540df to disappear Dec 28 22:01:26.751: INFO: Pod downwardapi-volume-4bbdb829-ce6e-46fe-846e-9515322540df no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:01:26.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9221" for this suite. • [SLOW TEST:8.386 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2014,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:01:26.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 22:01:27.044: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Dec 28 22:01:30.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1250 create -f -' Dec 28 22:01:33.419: INFO: stderr: "" Dec 28 22:01:33.419: INFO: stdout: "e2e-test-crd-publish-openapi-7689-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Dec 28 22:01:33.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1250 delete e2e-test-crd-publish-openapi-7689-crds test-foo' Dec 28 22:01:33.532: INFO: stderr: "" Dec 28 22:01:33.532: INFO: stdout: "e2e-test-crd-publish-openapi-7689-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Dec 28 22:01:33.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1250 apply -f -' Dec 28 22:01:33.955: INFO: stderr: "" Dec 28 22:01:33.956: INFO: stdout: "e2e-test-crd-publish-openapi-7689-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Dec 28 22:01:33.956: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1250 delete e2e-test-crd-publish-openapi-7689-crds test-foo' Dec 28 22:01:34.102: INFO: stderr: "" Dec 28 22:01:34.102: INFO: stdout: "e2e-test-crd-publish-openapi-7689-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Dec 28 22:01:34.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1250 create -f -' Dec 28 22:01:34.431: INFO: rc: 1 Dec 28 22:01:34.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1250 apply -f -' Dec 28 22:01:34.773: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Dec 28 22:01:34.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1250 create -f -' Dec 28 22:01:35.183: INFO: rc: 1 Dec 28 22:01:35.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1250 apply -f -' Dec 28 22:01:35.501: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Dec 28 22:01:35.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7689-crds' Dec 28 22:01:35.768: INFO: stderr: "" Dec 28 22:01:35.768: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7689-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Dec 28 22:01:35.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7689-crds.metadata' Dec 28 22:01:36.161: INFO: stderr: "" Dec 28 22:01:36.161: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7689-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. 
More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). 
Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Dec 28 22:01:36.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7689-crds.spec' Dec 28 22:01:36.591: INFO: stderr: "" Dec 28 22:01:36.591: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7689-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Dec 28 22:01:36.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7689-crds.spec.bars' Dec 28 22:01:37.057: INFO: stderr: "" Dec 28 22:01:37.057: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7689-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Dec 28 22:01:37.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7689-crds.spec.bars2' Dec 28 22:01:37.541: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:01:41.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1250" for this suite. • [SLOW TEST:14.207 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":132,"skipped":2053,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:01:41.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 28 22:01:41.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79537f7d-af02-43d2-88c4-f8b97746a789" in namespace "projected-608" to be "success or failure" Dec 28 22:01:41.243: INFO: Pod "downwardapi-volume-79537f7d-af02-43d2-88c4-f8b97746a789": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.473773ms Dec 28 22:01:43.458: INFO: Pod "downwardapi-volume-79537f7d-af02-43d2-88c4-f8b97746a789": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222776003s Dec 28 22:01:45.467: INFO: Pod "downwardapi-volume-79537f7d-af02-43d2-88c4-f8b97746a789": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231921024s Dec 28 22:01:47.483: INFO: Pod "downwardapi-volume-79537f7d-af02-43d2-88c4-f8b97746a789": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248192842s Dec 28 22:01:49.493: INFO: Pod "downwardapi-volume-79537f7d-af02-43d2-88c4-f8b97746a789": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.257442915s STEP: Saw pod success Dec 28 22:01:49.493: INFO: Pod "downwardapi-volume-79537f7d-af02-43d2-88c4-f8b97746a789" satisfied condition "success or failure" Dec 28 22:01:49.498: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-79537f7d-af02-43d2-88c4-f8b97746a789 container client-container: STEP: delete the pod Dec 28 22:01:49.581: INFO: Waiting for pod downwardapi-volume-79537f7d-af02-43d2-88c4-f8b97746a789 to disappear Dec 28 22:01:49.588: INFO: Pod downwardapi-volume-79537f7d-af02-43d2-88c4-f8b97746a789 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:01:49.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-608" for this suite. • [SLOW TEST:8.556 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2061,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:01:49.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 28 22:01:49.792: INFO: Waiting up to 5m0s for pod "pod-dcf63368-6632-44cf-bbfd-581271e3564c" in namespace "emptydir-9137" to be "success or failure" Dec 28 22:01:49.819: INFO: Pod "pod-dcf63368-6632-44cf-bbfd-581271e3564c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.828559ms Dec 28 22:01:51.831: INFO: Pod "pod-dcf63368-6632-44cf-bbfd-581271e3564c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039317477s Dec 28 22:01:53.846: INFO: Pod "pod-dcf63368-6632-44cf-bbfd-581271e3564c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.05448605s Dec 28 22:01:55.865: INFO: Pod "pod-dcf63368-6632-44cf-bbfd-581271e3564c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073528066s Dec 28 22:01:57.877: INFO: Pod "pod-dcf63368-6632-44cf-bbfd-581271e3564c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084851397s STEP: Saw pod success Dec 28 22:01:57.877: INFO: Pod "pod-dcf63368-6632-44cf-bbfd-581271e3564c" satisfied condition "success or failure" Dec 28 22:01:57.883: INFO: Trying to get logs from node jerma-node pod pod-dcf63368-6632-44cf-bbfd-581271e3564c container test-container: STEP: delete the pod Dec 28 22:01:57.931: INFO: Waiting for pod pod-dcf63368-6632-44cf-bbfd-581271e3564c to disappear Dec 28 22:01:58.020: INFO: Pod pod-dcf63368-6632-44cf-bbfd-581271e3564c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:01:58.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9137" for this suite. • [SLOW TEST:8.440 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2066,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:01:58.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 22:01:58.117: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:01:58.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9456" for this suite. 
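------------------------------
Getting, updating, and patching the status sub-resource only works when the CRD opts into it: with subresources.status set, /status becomes a separate endpoint whose writes cannot modify spec, which is exactly what the spec above exercises. Extending the illustrative CRD sketch from earlier:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com             # hypothetical, as before
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}                     # enables GET/PUT/PATCH on .../foos/<name>/status
    schema:
      openAPIV3Schema:
        type: object
------------------------------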
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":135,"skipped":2080,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:01:58.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Dec 28 22:01:59.084: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:02:11.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1699" for this suite. • [SLOW TEST:12.304 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":136,"skipped":2099,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:02:11.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 28 22:02:11.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bb29e0d-6796-4f7a-a956-4e6c0c3ce08b" in namespace "projected-2407" to be "success or failure" Dec 28 22:02:11.372: INFO: Pod "downwardapi-volume-7bb29e0d-6796-4f7a-a956-4e6c0c3ce08b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 43.534497ms Dec 28 22:02:13.381: INFO: Pod "downwardapi-volume-7bb29e0d-6796-4f7a-a956-4e6c0c3ce08b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053057746s Dec 28 22:02:15.390: INFO: Pod "downwardapi-volume-7bb29e0d-6796-4f7a-a956-4e6c0c3ce08b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061225526s Dec 28 22:02:17.404: INFO: Pod "downwardapi-volume-7bb29e0d-6796-4f7a-a956-4e6c0c3ce08b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075645608s Dec 28 22:02:19.438: INFO: Pod "downwardapi-volume-7bb29e0d-6796-4f7a-a956-4e6c0c3ce08b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109963666s Dec 28 22:02:21.447: INFO: Pod "downwardapi-volume-7bb29e0d-6796-4f7a-a956-4e6c0c3ce08b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.1183925s STEP: Saw pod success Dec 28 22:02:21.447: INFO: Pod "downwardapi-volume-7bb29e0d-6796-4f7a-a956-4e6c0c3ce08b" satisfied condition "success or failure" Dec 28 22:02:21.452: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7bb29e0d-6796-4f7a-a956-4e6c0c3ce08b container client-container: STEP: delete the pod Dec 28 22:02:21.503: INFO: Waiting for pod downwardapi-volume-7bb29e0d-6796-4f7a-a956-4e6c0c3ce08b to disappear Dec 28 22:02:21.512: INFO: Pod downwardapi-volume-7bb29e0d-6796-4f7a-a956-4e6c0c3ce08b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:02:21.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2407" for this suite. • [SLOW TEST:10.358 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2109,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:02:21.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:02:29.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4048" for this suite. 
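------------------------------
The hostAliases spec verifies that entries declared on the pod are appended by the kubelet to the container's /etc/hosts. A minimal pod demonstrating the mechanism (the names and IP are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo             # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]   # the output should contain the aliases above
------------------------------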
• [SLOW TEST:8.342 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2124,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:02:29.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-c3f0477b-227c-4123-bb33-1eeb8158a546 STEP: Creating a pod to test consume secrets Dec 28 22:02:30.020: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-15d091a6-f71e-489f-85e2-3a332ea6a0dd" in namespace "projected-1037" to be "success or failure" Dec 28 22:02:30.043: INFO: Pod "pod-projected-secrets-15d091a6-f71e-489f-85e2-3a332ea6a0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 23.253022ms Dec 28 22:02:32.054: INFO: Pod "pod-projected-secrets-15d091a6-f71e-489f-85e2-3a332ea6a0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03452993s Dec 28 22:02:34.063: INFO: Pod "pod-projected-secrets-15d091a6-f71e-489f-85e2-3a332ea6a0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042755908s Dec 28 22:02:36.070: INFO: Pod "pod-projected-secrets-15d091a6-f71e-489f-85e2-3a332ea6a0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050023238s Dec 28 22:02:38.077: INFO: Pod "pod-projected-secrets-15d091a6-f71e-489f-85e2-3a332ea6a0dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056614305s STEP: Saw pod success Dec 28 22:02:38.077: INFO: Pod "pod-projected-secrets-15d091a6-f71e-489f-85e2-3a332ea6a0dd" satisfied condition "success or failure" Dec 28 22:02:38.080: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-15d091a6-f71e-489f-85e2-3a332ea6a0dd container projected-secret-volume-test: STEP: delete the pod Dec 28 22:02:38.119: INFO: Waiting for pod pod-projected-secrets-15d091a6-f71e-489f-85e2-3a332ea6a0dd to disappear Dec 28 22:02:38.147: INFO: Pod pod-projected-secrets-15d091a6-f71e-489f-85e2-3a332ea6a0dd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:02:38.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1037" for this suite. 
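------------------------------
A projected volume merges several sources (secret, configMap, downwardAPI, serviceAccountToken) into a single mount; the spec above consumes one secret through it and reads the result back from the container log. A self-contained sketch with hypothetical names:

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret                  # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: demo-secret
------------------------------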
• [SLOW TEST:8.242 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2125,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:02:38.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:02:47.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6195" for this suite. • [SLOW TEST:9.231 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":140,"skipped":2135,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:02:47.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-7d22eccf-2b07-4fac-95d8-89d7746b8e01 STEP: Creating a pod to test consume configMaps Dec 28 22:02:47.500: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-af679f31-70cb-42fa-8bae-11e30f518f07" in namespace "projected-776" to be "success or failure" Dec 28 22:02:47.520: INFO: Pod "pod-projected-configmaps-af679f31-70cb-42fa-8bae-11e30f518f07": Phase="Pending", Reason="", 
readiness=false. Elapsed: 20.054348ms Dec 28 22:02:49.528: INFO: Pod "pod-projected-configmaps-af679f31-70cb-42fa-8bae-11e30f518f07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028249623s Dec 28 22:02:51.536: INFO: Pod "pod-projected-configmaps-af679f31-70cb-42fa-8bae-11e30f518f07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03612827s Dec 28 22:02:54.197: INFO: Pod "pod-projected-configmaps-af679f31-70cb-42fa-8bae-11e30f518f07": Phase="Pending", Reason="", readiness=false. Elapsed: 6.697317186s Dec 28 22:02:56.505: INFO: Pod "pod-projected-configmaps-af679f31-70cb-42fa-8bae-11e30f518f07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.004572721s STEP: Saw pod success Dec 28 22:02:56.505: INFO: Pod "pod-projected-configmaps-af679f31-70cb-42fa-8bae-11e30f518f07" satisfied condition "success or failure" Dec 28 22:02:56.516: INFO: Trying to get logs from node jerma-server-4b75xjbddvit pod pod-projected-configmaps-af679f31-70cb-42fa-8bae-11e30f518f07 container projected-configmap-volume-test: STEP: delete the pod Dec 28 22:02:57.008: INFO: Waiting for pod pod-projected-configmaps-af679f31-70cb-42fa-8bae-11e30f518f07 to disappear Dec 28 22:02:57.016: INFO: Pod pod-projected-configmaps-af679f31-70cb-42fa-8bae-11e30f518f07 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:02:57.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-776" for this suite. • [SLOW TEST:9.663 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:02:57.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 28 22:03:15.342: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 28 22:03:15.373: INFO: Pod pod-with-poststart-exec-hook still exists Dec 28 22:03:17.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 28 22:03:17.383: INFO: Pod pod-with-poststart-exec-hook still exists Dec 28 22:03:19.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 28 22:03:19.481: INFO: Pod pod-with-poststart-exec-hook still exists Dec 28 22:03:21.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 28 22:03:21.388: INFO: Pod pod-with-poststart-exec-hook still exists Dec 28 22:03:23.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 28 22:03:23.393: INFO: Pod pod-with-poststart-exec-hook still exists Dec 28 22:03:25.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 28 22:03:25.383: INFO: Pod pod-with-poststart-exec-hook still exists Dec 28 22:03:27.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 28 22:03:27.385: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:03:27.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3520" for this suite. • [SLOW TEST:30.350 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:03:27.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:03:32.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1214" for this suite. • [SLOW TEST:5.273 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":143,"skipped":2205,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:03:32.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Dec 28 22:03:32.867: INFO: Waiting up to 5m0s for pod "var-expansion-6350e9c4-f614-4f23-ab1f-e9624a5d9ece" in namespace "var-expansion-7015" to be "success or failure" Dec 28 22:03:32.909: INFO: Pod "var-expansion-6350e9c4-f614-4f23-ab1f-e9624a5d9ece": Phase="Pending", Reason="", readiness=false. Elapsed: 41.232246ms Dec 28 22:03:34.916: INFO: Pod "var-expansion-6350e9c4-f614-4f23-ab1f-e9624a5d9ece": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048098579s Dec 28 22:03:37.115: INFO: Pod "var-expansion-6350e9c4-f614-4f23-ab1f-e9624a5d9ece": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247272058s Dec 28 22:03:39.128: INFO: Pod "var-expansion-6350e9c4-f614-4f23-ab1f-e9624a5d9ece": Phase="Pending", Reason="", readiness=false. Elapsed: 6.260291064s Dec 28 22:03:41.142: INFO: Pod "var-expansion-6350e9c4-f614-4f23-ab1f-e9624a5d9ece": Phase="Pending", Reason="", readiness=false. Elapsed: 8.273868403s Dec 28 22:03:43.152: INFO: Pod "var-expansion-6350e9c4-f614-4f23-ab1f-e9624a5d9ece": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.284101441s STEP: Saw pod success Dec 28 22:03:43.152: INFO: Pod "var-expansion-6350e9c4-f614-4f23-ab1f-e9624a5d9ece" satisfied condition "success or failure" Dec 28 22:03:43.157: INFO: Trying to get logs from node jerma-node pod var-expansion-6350e9c4-f614-4f23-ab1f-e9624a5d9ece container dapi-container: STEP: delete the pod Dec 28 22:03:43.241: INFO: Waiting for pod var-expansion-6350e9c4-f614-4f23-ab1f-e9624a5d9ece to disappear Dec 28 22:03:43.248: INFO: Pod var-expansion-6350e9c4-f614-4f23-ab1f-e9624a5d9ece no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:03:43.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7015" for this suite. 
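The env composition verified here is the kubelet's $(VAR) expansion: an env entry may reference entries declared before it in the same container. A minimal sketch of such a pod, assuming a busybox image (the FOO/BAR/FOOBAR naming follows the upstream fixture; the rest is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example          # illustrative; the e2e pod name is generated
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      # $(FOO) and $(BAR) are expanded by the kubelet at container start;
      # this composed value is what the test asserts on.
      value: "$(FOO);;$(BAR)"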
• [SLOW TEST:10.595 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2216,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:03:43.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 28 22:03:43.402: INFO: Waiting up to 5m0s for pod "pod-1bc682cc-c8a8-477d-834d-12762bfcb433" in namespace "emptydir-1198" to be "success or failure" Dec 28 22:03:43.417: INFO: Pod "pod-1bc682cc-c8a8-477d-834d-12762bfcb433": Phase="Pending", Reason="", readiness=false. Elapsed: 14.906582ms Dec 28 22:03:45.430: INFO: Pod "pod-1bc682cc-c8a8-477d-834d-12762bfcb433": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027468493s Dec 28 22:03:47.443: INFO: Pod "pod-1bc682cc-c8a8-477d-834d-12762bfcb433": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040870341s Dec 28 22:03:49.454: INFO: Pod "pod-1bc682cc-c8a8-477d-834d-12762bfcb433": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051849517s Dec 28 22:03:51.479: INFO: Pod "pod-1bc682cc-c8a8-477d-834d-12762bfcb433": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076555226s STEP: Saw pod success Dec 28 22:03:51.479: INFO: Pod "pod-1bc682cc-c8a8-477d-834d-12762bfcb433" satisfied condition "success or failure" Dec 28 22:03:51.483: INFO: Trying to get logs from node jerma-node pod pod-1bc682cc-c8a8-477d-834d-12762bfcb433 container test-container: STEP: delete the pod Dec 28 22:03:51.541: INFO: Waiting for pod pod-1bc682cc-c8a8-477d-834d-12762bfcb433 to disappear Dec 28 22:03:51.559: INFO: Pod pod-1bc682cc-c8a8-477d-834d-12762bfcb433 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:03:51.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1198" for this suite. 
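The (non-root,0777,tmpfs) triple in the spec title maps onto three pod fields: a non-root securityContext, a 0777 permission check performed by the test container, and an emptyDir backed by medium: Memory. A minimal sketch with illustrative names, image, and command:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # the "non-root" part of the title
  containers:
  - name: test-container
    image: busybox
    # Write a file, force mode 0777, and print it back so the perms can be asserted.
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs-backed emptyDir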
• [SLOW TEST:8.295 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2221,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:03:51.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 22:03:51.723: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-5597 I1228 22:03:51.815938 8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5597, replica count: 1 I1228 22:03:52.866840 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 22:03:53.867440 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 22:03:54.868353 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 22:03:55.869352 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 22:03:56.869922 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 22:03:57.871406 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 22:03:58.872247 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 22:03:59.872793 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 28 22:04:00.014: INFO: Created: latency-svc-5z29b Dec 28 22:04:00.023: INFO: Got endpoints: latency-svc-5z29b [50.339838ms] Dec 28 22:04:00.108: INFO: Created: latency-svc-ncn6q Dec 28 22:04:00.117: INFO: Got endpoints: latency-svc-ncn6q [92.975193ms] Dec 28 22:04:00.142: INFO: Created: latency-svc-m97zd Dec 28 22:04:00.151: INFO: Got endpoints: latency-svc-m97zd [127.693258ms] Dec 28 22:04:00.251: INFO: Created: latency-svc-ngmn9 Dec 28 22:04:00.258: INFO: Got endpoints: latency-svc-ngmn9 [235.012583ms] Dec 28 22:04:00.306: INFO: Created: latency-svc-jv8tq Dec 28 22:04:00.349: 
INFO: Got endpoints: latency-svc-jv8tq [325.874925ms] Dec 28 22:04:00.474: INFO: Created: latency-svc-g59tn Dec 28 22:04:00.484: INFO: Got endpoints: latency-svc-g59tn [460.605325ms] Dec 28 22:04:00.532: INFO: Created: latency-svc-t2d9l Dec 28 22:04:00.537: INFO: Got endpoints: latency-svc-t2d9l [513.543317ms] Dec 28 22:04:00.574: INFO: Created: latency-svc-tkvnr Dec 28 22:04:00.657: INFO: Got endpoints: latency-svc-tkvnr [632.133493ms] Dec 28 22:04:00.669: INFO: Created: latency-svc-n6xjp Dec 28 22:04:00.677: INFO: Got endpoints: latency-svc-n6xjp [653.516469ms] Dec 28 22:04:00.716: INFO: Created: latency-svc-rtb5p Dec 28 22:04:00.733: INFO: Got endpoints: latency-svc-rtb5p [708.329119ms] Dec 28 22:04:00.863: INFO: Created: latency-svc-hk7nk Dec 28 22:04:00.871: INFO: Got endpoints: latency-svc-hk7nk [846.328354ms] Dec 28 22:04:00.915: INFO: Created: latency-svc-slwfp Dec 28 22:04:00.941: INFO: Got endpoints: latency-svc-slwfp [916.314155ms] Dec 28 22:04:01.007: INFO: Created: latency-svc-bvqgq Dec 28 22:04:01.013: INFO: Got endpoints: latency-svc-bvqgq [988.212942ms] Dec 28 22:04:01.047: INFO: Created: latency-svc-75hh5 Dec 28 22:04:01.051: INFO: Got endpoints: latency-svc-75hh5 [1.026275929s] Dec 28 22:04:01.093: INFO: Created: latency-svc-zgxq5 Dec 28 22:04:01.106: INFO: Got endpoints: latency-svc-zgxq5 [1.08179015s] Dec 28 22:04:01.230: INFO: Created: latency-svc-kgbz6 Dec 28 22:04:01.276: INFO: Got endpoints: latency-svc-kgbz6 [1.252035723s] Dec 28 22:04:01.284: INFO: Created: latency-svc-jc75l Dec 28 22:04:01.309: INFO: Got endpoints: latency-svc-jc75l [1.192721672s] Dec 28 22:04:01.408: INFO: Created: latency-svc-mllmb Dec 28 22:04:01.411: INFO: Got endpoints: latency-svc-mllmb [1.258978451s] Dec 28 22:04:01.443: INFO: Created: latency-svc-5mccl Dec 28 22:04:01.448: INFO: Got endpoints: latency-svc-5mccl [1.188840699s] Dec 28 22:04:01.482: INFO: Created: latency-svc-7b8q4 Dec 28 22:04:01.485: INFO: Got endpoints: latency-svc-7b8q4 [1.135284666s] Dec 28 22:04:01.561: INFO: Created: latency-svc-pdtqv Dec 28 22:04:01.565: INFO: Got endpoints: latency-svc-pdtqv [1.08023826s] Dec 28 22:04:01.608: INFO: Created: latency-svc-g42qm Dec 28 22:04:01.623: INFO: Got endpoints: latency-svc-g42qm [1.085717888s] Dec 28 22:04:01.841: INFO: Created: latency-svc-kgrz4 Dec 28 22:04:01.844: INFO: Got endpoints: latency-svc-kgrz4 [1.187051194s] Dec 28 22:04:02.031: INFO: Created: latency-svc-lh94q Dec 28 22:04:02.103: INFO: Created: latency-svc-lwkfp Dec 28 22:04:02.103: INFO: Got endpoints: latency-svc-lh94q [1.426121659s] Dec 28 22:04:02.112: INFO: Got endpoints: latency-svc-lwkfp [1.378920471s] Dec 28 22:04:02.244: INFO: Created: latency-svc-snzp5 Dec 28 22:04:02.251: INFO: Got endpoints: latency-svc-snzp5 [1.37976794s] Dec 28 22:04:02.300: INFO: Created: latency-svc-wdq6v Dec 28 22:04:02.304: INFO: Got endpoints: latency-svc-wdq6v [1.362855898s] Dec 28 22:04:02.443: INFO: Created: latency-svc-kqstb Dec 28 22:04:02.446: INFO: Got endpoints: latency-svc-kqstb [1.432926544s] Dec 28 22:04:02.492: INFO: Created: latency-svc-xv989 Dec 28 22:04:02.495: INFO: Got endpoints: latency-svc-xv989 [1.443859726s] Dec 28 22:04:02.541: INFO: Created: latency-svc-lvg5h Dec 28 22:04:02.680: INFO: Got endpoints: latency-svc-lvg5h [1.573799998s] Dec 28 22:04:02.689: INFO: Created: latency-svc-56qc2 Dec 28 22:04:02.693: INFO: Got endpoints: latency-svc-56qc2 [1.416263792s] Dec 28 22:04:02.747: INFO: Created: latency-svc-rwkds Dec 28 22:04:02.764: INFO: Got endpoints: latency-svc-rwkds [1.454936121s] Dec 28 22:04:02.874: 
INFO: Created: latency-svc-vpgkq Dec 28 22:04:02.874: INFO: Got endpoints: latency-svc-vpgkq [1.463697278s] Dec 28 22:04:02.934: INFO: Created: latency-svc-gfnb8 Dec 28 22:04:02.936: INFO: Got endpoints: latency-svc-gfnb8 [1.48848681s] Dec 28 22:04:03.035: INFO: Created: latency-svc-dswcp Dec 28 22:04:03.035: INFO: Got endpoints: latency-svc-dswcp [1.55030208s] Dec 28 22:04:03.066: INFO: Created: latency-svc-drqhz Dec 28 22:04:03.066: INFO: Got endpoints: latency-svc-drqhz [1.500847896s] Dec 28 22:04:03.096: INFO: Created: latency-svc-vmkff Dec 28 22:04:03.105: INFO: Got endpoints: latency-svc-vmkff [1.481779314s] Dec 28 22:04:03.219: INFO: Created: latency-svc-tpq2z Dec 28 22:04:03.246: INFO: Got endpoints: latency-svc-tpq2z [1.402292922s] Dec 28 22:04:03.284: INFO: Created: latency-svc-cntnd Dec 28 22:04:03.356: INFO: Got endpoints: latency-svc-cntnd [1.252717916s] Dec 28 22:04:03.378: INFO: Created: latency-svc-k2f6q Dec 28 22:04:03.383: INFO: Got endpoints: latency-svc-k2f6q [1.270787204s] Dec 28 22:04:03.414: INFO: Created: latency-svc-xqgfn Dec 28 22:04:03.434: INFO: Got endpoints: latency-svc-xqgfn [1.182687046s] Dec 28 22:04:03.525: INFO: Created: latency-svc-htsjn Dec 28 22:04:03.534: INFO: Got endpoints: latency-svc-htsjn [1.229837345s] Dec 28 22:04:03.682: INFO: Created: latency-svc-cm7p9 Dec 28 22:04:03.696: INFO: Got endpoints: latency-svc-cm7p9 [1.249910306s] Dec 28 22:04:03.845: INFO: Created: latency-svc-hbf2h Dec 28 22:04:03.890: INFO: Got endpoints: latency-svc-hbf2h [1.395074847s] Dec 28 22:04:03.897: INFO: Created: latency-svc-rsf75 Dec 28 22:04:03.904: INFO: Got endpoints: latency-svc-rsf75 [1.223435335s] Dec 28 22:04:04.076: INFO: Created: latency-svc-2pknw Dec 28 22:04:04.076: INFO: Got endpoints: latency-svc-2pknw [1.383541581s] Dec 28 22:04:04.138: INFO: Created: latency-svc-l2nhf Dec 28 22:04:04.139: INFO: Got endpoints: latency-svc-l2nhf [1.373548547s] Dec 28 22:04:04.210: INFO: Created: latency-svc-tf9sf Dec 28 22:04:04.213: INFO: Got endpoints: latency-svc-tf9sf [1.338814227s] Dec 28 22:04:04.264: INFO: Created: latency-svc-mjpd5 Dec 28 22:04:04.274: INFO: Got endpoints: latency-svc-mjpd5 [1.337301341s] Dec 28 22:04:04.313: INFO: Created: latency-svc-zxlvr Dec 28 22:04:04.386: INFO: Got endpoints: latency-svc-zxlvr [1.350715973s] Dec 28 22:04:04.410: INFO: Created: latency-svc-gwm2k Dec 28 22:04:04.429: INFO: Got endpoints: latency-svc-gwm2k [1.363173177s] Dec 28 22:04:04.460: INFO: Created: latency-svc-cprm5 Dec 28 22:04:04.461: INFO: Got endpoints: latency-svc-cprm5 [1.356195834s] Dec 28 22:04:04.558: INFO: Created: latency-svc-gpqt5 Dec 28 22:04:04.572: INFO: Got endpoints: latency-svc-gpqt5 [1.324887123s] Dec 28 22:04:04.609: INFO: Created: latency-svc-fjj22 Dec 28 22:04:04.627: INFO: Got endpoints: latency-svc-fjj22 [1.270710944s] Dec 28 22:04:04.757: INFO: Created: latency-svc-2gqgm Dec 28 22:04:04.758: INFO: Got endpoints: latency-svc-2gqgm [1.375070871s] Dec 28 22:04:04.816: INFO: Created: latency-svc-4crm7 Dec 28 22:04:04.826: INFO: Got endpoints: latency-svc-4crm7 [1.390918306s] Dec 28 22:04:04.923: INFO: Created: latency-svc-sktfz Dec 28 22:04:04.954: INFO: Got endpoints: latency-svc-sktfz [1.419359605s] Dec 28 22:04:04.956: INFO: Created: latency-svc-l6nhj Dec 28 22:04:04.967: INFO: Got endpoints: latency-svc-l6nhj [1.27008424s] Dec 28 22:04:05.077: INFO: Created: latency-svc-psx2h Dec 28 22:04:05.084: INFO: Got endpoints: latency-svc-psx2h [1.193046358s] Dec 28 22:04:05.120: INFO: Created: latency-svc-kc57d Dec 28 22:04:05.123: INFO: Got endpoints: 
latency-svc-kc57d [1.219076045s] Dec 28 22:04:05.169: INFO: Created: latency-svc-4pz9v Dec 28 22:04:05.173: INFO: Got endpoints: latency-svc-4pz9v [1.096114073s] Dec 28 22:04:05.237: INFO: Created: latency-svc-8kt4z Dec 28 22:04:05.245: INFO: Got endpoints: latency-svc-8kt4z [1.105898271s] Dec 28 22:04:05.280: INFO: Created: latency-svc-v2df5 Dec 28 22:04:05.293: INFO: Got endpoints: latency-svc-v2df5 [1.079804587s] Dec 28 22:04:05.320: INFO: Created: latency-svc-7gpww Dec 28 22:04:05.407: INFO: Got endpoints: latency-svc-7gpww [1.132780353s] Dec 28 22:04:05.422: INFO: Created: latency-svc-c42lj Dec 28 22:04:05.440: INFO: Got endpoints: latency-svc-c42lj [1.053927471s] Dec 28 22:04:05.468: INFO: Created: latency-svc-ftkzs Dec 28 22:04:05.476: INFO: Got endpoints: latency-svc-ftkzs [1.045362449s] Dec 28 22:04:05.503: INFO: Created: latency-svc-rvhkg Dec 28 22:04:05.574: INFO: Got endpoints: latency-svc-rvhkg [1.113068485s] Dec 28 22:04:05.580: INFO: Created: latency-svc-qhpnn Dec 28 22:04:05.594: INFO: Got endpoints: latency-svc-qhpnn [1.021834152s] Dec 28 22:04:05.626: INFO: Created: latency-svc-nrjg2 Dec 28 22:04:05.634: INFO: Got endpoints: latency-svc-nrjg2 [1.005628444s] Dec 28 22:04:05.805: INFO: Created: latency-svc-z88dr Dec 28 22:04:05.817: INFO: Got endpoints: latency-svc-z88dr [1.058714914s] Dec 28 22:04:05.888: INFO: Created: latency-svc-j88dh Dec 28 22:04:06.024: INFO: Got endpoints: latency-svc-j88dh [1.197562793s] Dec 28 22:04:06.026: INFO: Created: latency-svc-rrt8w Dec 28 22:04:06.070: INFO: Got endpoints: latency-svc-rrt8w [1.116152755s] Dec 28 22:04:06.253: INFO: Created: latency-svc-2lnsm Dec 28 22:04:06.257: INFO: Got endpoints: latency-svc-2lnsm [1.290514248s] Dec 28 22:04:06.328: INFO: Created: latency-svc-5pjgs Dec 28 22:04:06.339: INFO: Got endpoints: latency-svc-5pjgs [267.906705ms] Dec 28 22:04:06.569: INFO: Created: latency-svc-fxvz9 Dec 28 22:04:06.583: INFO: Got endpoints: latency-svc-fxvz9 [1.499222191s] Dec 28 22:04:06.645: INFO: Created: latency-svc-wfkwd Dec 28 22:04:06.651: INFO: Got endpoints: latency-svc-wfkwd [1.527591529s] Dec 28 22:04:06.968: INFO: Created: latency-svc-dwrhf Dec 28 22:04:06.974: INFO: Got endpoints: latency-svc-dwrhf [1.801617859s] Dec 28 22:04:07.062: INFO: Created: latency-svc-jf4n4 Dec 28 22:04:07.062: INFO: Got endpoints: latency-svc-jf4n4 [1.817765575s] Dec 28 22:04:07.153: INFO: Created: latency-svc-dvg97 Dec 28 22:04:07.222: INFO: Created: latency-svc-q8nh5 Dec 28 22:04:07.223: INFO: Got endpoints: latency-svc-dvg97 [1.929485063s] Dec 28 22:04:08.021: INFO: Got endpoints: latency-svc-q8nh5 [2.614587971s] Dec 28 22:04:08.076: INFO: Created: latency-svc-fgmxv Dec 28 22:04:08.082: INFO: Got endpoints: latency-svc-fgmxv [2.642088535s] Dec 28 22:04:08.205: INFO: Created: latency-svc-rrgx5 Dec 28 22:04:08.215: INFO: Got endpoints: latency-svc-rrgx5 [2.739489033s] Dec 28 22:04:08.244: INFO: Created: latency-svc-dkr96 Dec 28 22:04:08.257: INFO: Got endpoints: latency-svc-dkr96 [2.682479627s] Dec 28 22:04:08.289: INFO: Created: latency-svc-6gsdm Dec 28 22:04:08.291: INFO: Got endpoints: latency-svc-6gsdm [2.697608915s] Dec 28 22:04:08.373: INFO: Created: latency-svc-s8w8t Dec 28 22:04:08.374: INFO: Got endpoints: latency-svc-s8w8t [2.739775834s] Dec 28 22:04:08.434: INFO: Created: latency-svc-xg5fl Dec 28 22:04:08.452: INFO: Got endpoints: latency-svc-xg5fl [2.634718714s] Dec 28 22:04:08.528: INFO: Created: latency-svc-h6v5h Dec 28 22:04:08.534: INFO: Got endpoints: latency-svc-h6v5h [2.510457927s] Dec 28 22:04:08.582: INFO: Created: 
latency-svc-pf4tl Dec 28 22:04:08.583: INFO: Got endpoints: latency-svc-pf4tl [2.32523496s] Dec 28 22:04:08.831: INFO: Created: latency-svc-2f88p Dec 28 22:04:08.838: INFO: Got endpoints: latency-svc-2f88p [2.499108058s] Dec 28 22:04:08.898: INFO: Created: latency-svc-c88g8 Dec 28 22:04:08.910: INFO: Got endpoints: latency-svc-c88g8 [2.325669861s] Dec 28 22:04:09.004: INFO: Created: latency-svc-jjx7v Dec 28 22:04:09.012: INFO: Got endpoints: latency-svc-jjx7v [2.360924986s] Dec 28 22:04:09.046: INFO: Created: latency-svc-mwlzs Dec 28 22:04:09.049: INFO: Got endpoints: latency-svc-mwlzs [2.074240255s] Dec 28 22:04:09.081: INFO: Created: latency-svc-ds447 Dec 28 22:04:09.092: INFO: Got endpoints: latency-svc-ds447 [2.029186821s] Dec 28 22:04:09.258: INFO: Created: latency-svc-qg4k9 Dec 28 22:04:09.293: INFO: Got endpoints: latency-svc-qg4k9 [2.070228964s] Dec 28 22:04:09.310: INFO: Created: latency-svc-rmwvb Dec 28 22:04:09.317: INFO: Got endpoints: latency-svc-rmwvb [1.295571023s] Dec 28 22:04:09.424: INFO: Created: latency-svc-w7nll Dec 28 22:04:09.435: INFO: Got endpoints: latency-svc-w7nll [1.352553734s] Dec 28 22:04:09.477: INFO: Created: latency-svc-mpr9r Dec 28 22:04:09.484: INFO: Got endpoints: latency-svc-mpr9r [1.268531578s] Dec 28 22:04:09.517: INFO: Created: latency-svc-56rj6 Dec 28 22:04:09.517: INFO: Got endpoints: latency-svc-56rj6 [1.259890384s] Dec 28 22:04:09.681: INFO: Created: latency-svc-bp8sj Dec 28 22:04:09.835: INFO: Got endpoints: latency-svc-bp8sj [1.543378739s] Dec 28 22:04:09.861: INFO: Created: latency-svc-dc7m5 Dec 28 22:04:09.874: INFO: Got endpoints: latency-svc-dc7m5 [1.500505551s] Dec 28 22:04:09.907: INFO: Created: latency-svc-56scr Dec 28 22:04:09.909: INFO: Got endpoints: latency-svc-56scr [1.456982069s] Dec 28 22:04:10.034: INFO: Created: latency-svc-klrmb Dec 28 22:04:10.063: INFO: Got endpoints: latency-svc-klrmb [1.528320327s] Dec 28 22:04:10.118: INFO: Created: latency-svc-mf5sc Dec 28 22:04:10.129: INFO: Got endpoints: latency-svc-mf5sc [1.546326949s] Dec 28 22:04:10.250: INFO: Created: latency-svc-btsp6 Dec 28 22:04:10.256: INFO: Got endpoints: latency-svc-btsp6 [1.41849988s] Dec 28 22:04:10.300: INFO: Created: latency-svc-89h58 Dec 28 22:04:10.313: INFO: Got endpoints: latency-svc-89h58 [1.402963498s] Dec 28 22:04:10.343: INFO: Created: latency-svc-rcs7w Dec 28 22:04:10.420: INFO: Got endpoints: latency-svc-rcs7w [1.408059681s] Dec 28 22:04:10.434: INFO: Created: latency-svc-84fr7 Dec 28 22:04:10.437: INFO: Got endpoints: latency-svc-84fr7 [1.387801192s] Dec 28 22:04:10.483: INFO: Created: latency-svc-vbqlk Dec 28 22:04:10.493: INFO: Got endpoints: latency-svc-vbqlk [1.401532763s] Dec 28 22:04:10.512: INFO: Created: latency-svc-qnf5c Dec 28 22:04:10.587: INFO: Got endpoints: latency-svc-qnf5c [1.293037968s] Dec 28 22:04:10.597: INFO: Created: latency-svc-fsq5d Dec 28 22:04:10.601: INFO: Got endpoints: latency-svc-fsq5d [1.283384996s] Dec 28 22:04:10.666: INFO: Created: latency-svc-9n7k8 Dec 28 22:04:10.672: INFO: Got endpoints: latency-svc-9n7k8 [1.235967586s] Dec 28 22:04:10.830: INFO: Created: latency-svc-fhjf5 Dec 28 22:04:10.831: INFO: Created: latency-svc-ttfrj Dec 28 22:04:10.855: INFO: Got endpoints: latency-svc-ttfrj [1.337879152s] Dec 28 22:04:10.865: INFO: Got endpoints: latency-svc-fhjf5 [1.380522347s] Dec 28 22:04:10.909: INFO: Created: latency-svc-rvp59 Dec 28 22:04:11.013: INFO: Got endpoints: latency-svc-rvp59 [1.17770588s] Dec 28 22:04:11.017: INFO: Created: latency-svc-4jvbd Dec 28 22:04:11.021: INFO: Got endpoints: 
latency-svc-4jvbd [1.14686091s] Dec 28 22:04:11.060: INFO: Created: latency-svc-c8npz Dec 28 22:04:11.066: INFO: Got endpoints: latency-svc-c8npz [1.156656226s] Dec 28 22:04:11.090: INFO: Created: latency-svc-tv6pv Dec 28 22:04:11.211: INFO: Got endpoints: latency-svc-tv6pv [1.147878642s] Dec 28 22:04:11.214: INFO: Created: latency-svc-ljbrj Dec 28 22:04:11.320: INFO: Got endpoints: latency-svc-ljbrj [1.190267426s] Dec 28 22:04:11.382: INFO: Created: latency-svc-wdq47 Dec 28 22:04:11.382: INFO: Got endpoints: latency-svc-wdq47 [1.125500632s] Dec 28 22:04:11.430: INFO: Created: latency-svc-h9dhj Dec 28 22:04:11.451: INFO: Got endpoints: latency-svc-h9dhj [1.137830642s] Dec 28 22:04:11.608: INFO: Created: latency-svc-dvhsw Dec 28 22:04:11.623: INFO: Got endpoints: latency-svc-dvhsw [1.202848106s] Dec 28 22:04:11.658: INFO: Created: latency-svc-w4nzt Dec 28 22:04:11.690: INFO: Got endpoints: latency-svc-w4nzt [1.252996688s] Dec 28 22:04:11.816: INFO: Created: latency-svc-5pmtf Dec 28 22:04:11.833: INFO: Got endpoints: latency-svc-5pmtf [1.33946816s] Dec 28 22:04:11.880: INFO: Created: latency-svc-8bxsk Dec 28 22:04:11.882: INFO: Got endpoints: latency-svc-8bxsk [1.294834547s] Dec 28 22:04:12.024: INFO: Created: latency-svc-smhs5 Dec 28 22:04:12.039: INFO: Got endpoints: latency-svc-smhs5 [1.437824756s] Dec 28 22:04:12.102: INFO: Created: latency-svc-9hlnz Dec 28 22:04:12.222: INFO: Got endpoints: latency-svc-9hlnz [1.550062845s] Dec 28 22:04:12.230: INFO: Created: latency-svc-5bbx6 Dec 28 22:04:12.237: INFO: Got endpoints: latency-svc-5bbx6 [1.380888366s] Dec 28 22:04:12.289: INFO: Created: latency-svc-tnb25 Dec 28 22:04:12.305: INFO: Got endpoints: latency-svc-tnb25 [1.439991365s] Dec 28 22:04:12.433: INFO: Created: latency-svc-h9ldt Dec 28 22:04:12.436: INFO: Got endpoints: latency-svc-h9ldt [1.423381658s] Dec 28 22:04:12.471: INFO: Created: latency-svc-j6rzw Dec 28 22:04:12.474: INFO: Got endpoints: latency-svc-j6rzw [1.45258853s] Dec 28 22:04:12.523: INFO: Created: latency-svc-vvb24 Dec 28 22:04:12.640: INFO: Got endpoints: latency-svc-vvb24 [1.573653735s] Dec 28 22:04:12.673: INFO: Created: latency-svc-b6xc9 Dec 28 22:04:12.678: INFO: Got endpoints: latency-svc-b6xc9 [1.466013194s] Dec 28 22:04:12.855: INFO: Created: latency-svc-6jbgm Dec 28 22:04:12.871: INFO: Got endpoints: latency-svc-6jbgm [1.550579125s] Dec 28 22:04:12.945: INFO: Created: latency-svc-l5mmd Dec 28 22:04:13.022: INFO: Got endpoints: latency-svc-l5mmd [1.639881856s] Dec 28 22:04:13.038: INFO: Created: latency-svc-wt2vh Dec 28 22:04:13.068: INFO: Got endpoints: latency-svc-wt2vh [1.616843473s] Dec 28 22:04:13.087: INFO: Created: latency-svc-jrqwm Dec 28 22:04:13.112: INFO: Got endpoints: latency-svc-jrqwm [1.488948432s] Dec 28 22:04:13.216: INFO: Created: latency-svc-ddjf2 Dec 28 22:04:13.219: INFO: Got endpoints: latency-svc-ddjf2 [1.528637876s] Dec 28 22:04:13.282: INFO: Created: latency-svc-48vmm Dec 28 22:04:13.294: INFO: Got endpoints: latency-svc-48vmm [1.460538025s] Dec 28 22:04:13.402: INFO: Created: latency-svc-9fg6b Dec 28 22:04:13.407: INFO: Got endpoints: latency-svc-9fg6b [1.525543245s] Dec 28 22:04:13.438: INFO: Created: latency-svc-mwg89 Dec 28 22:04:13.463: INFO: Got endpoints: latency-svc-mwg89 [1.423822195s] Dec 28 22:04:13.488: INFO: Created: latency-svc-5bt7l Dec 28 22:04:13.569: INFO: Got endpoints: latency-svc-5bt7l [1.346254691s] Dec 28 22:04:13.576: INFO: Created: latency-svc-j7s8m Dec 28 22:04:13.581: INFO: Got endpoints: latency-svc-j7s8m [1.344044614s] Dec 28 22:04:13.638: INFO: Created: 
latency-svc-fsphh Dec 28 22:04:13.645: INFO: Got endpoints: latency-svc-fsphh [1.339314979s] Dec 28 22:04:13.757: INFO: Created: latency-svc-q2ks5 Dec 28 22:04:13.758: INFO: Got endpoints: latency-svc-q2ks5 [1.321014415s] Dec 28 22:04:13.924: INFO: Created: latency-svc-r287d Dec 28 22:04:13.925: INFO: Got endpoints: latency-svc-r287d [1.450285426s] Dec 28 22:04:13.982: INFO: Created: latency-svc-sm8dl Dec 28 22:04:13.998: INFO: Got endpoints: latency-svc-sm8dl [1.357827298s] Dec 28 22:04:14.182: INFO: Created: latency-svc-6n7pp Dec 28 22:04:14.190: INFO: Got endpoints: latency-svc-6n7pp [1.51239657s] Dec 28 22:04:14.237: INFO: Created: latency-svc-89rxz Dec 28 22:04:14.256: INFO: Got endpoints: latency-svc-89rxz [1.384972745s] Dec 28 22:04:14.383: INFO: Created: latency-svc-kxv2c Dec 28 22:04:14.401: INFO: Got endpoints: latency-svc-kxv2c [1.37900827s] Dec 28 22:04:14.424: INFO: Created: latency-svc-hblzm Dec 28 22:04:14.427: INFO: Got endpoints: latency-svc-hblzm [1.35894355s] Dec 28 22:04:14.469: INFO: Created: latency-svc-cmjxm Dec 28 22:04:14.478: INFO: Got endpoints: latency-svc-cmjxm [1.365073591s] Dec 28 22:04:14.565: INFO: Created: latency-svc-6944b Dec 28 22:04:14.577: INFO: Got endpoints: latency-svc-6944b [1.358112105s] Dec 28 22:04:14.600: INFO: Created: latency-svc-vjj6g Dec 28 22:04:14.612: INFO: Got endpoints: latency-svc-vjj6g [1.318275155s] Dec 28 22:04:14.701: INFO: Created: latency-svc-hxm2f Dec 28 22:04:14.712: INFO: Got endpoints: latency-svc-hxm2f [1.303987734s] Dec 28 22:04:14.740: INFO: Created: latency-svc-gdpmb Dec 28 22:04:14.752: INFO: Got endpoints: latency-svc-gdpmb [1.288434169s] Dec 28 22:04:14.794: INFO: Created: latency-svc-xnl6m Dec 28 22:04:14.912: INFO: Got endpoints: latency-svc-xnl6m [1.342754132s] Dec 28 22:04:14.921: INFO: Created: latency-svc-cdmrn Dec 28 22:04:14.923: INFO: Got endpoints: latency-svc-cdmrn [1.34196494s] Dec 28 22:04:14.967: INFO: Created: latency-svc-xwpk8 Dec 28 22:04:14.974: INFO: Got endpoints: latency-svc-xwpk8 [1.328646288s] Dec 28 22:04:15.016: INFO: Created: latency-svc-hbmbh Dec 28 22:04:15.092: INFO: Got endpoints: latency-svc-hbmbh [1.334505908s] Dec 28 22:04:15.101: INFO: Created: latency-svc-q9z8z Dec 28 22:04:15.121: INFO: Got endpoints: latency-svc-q9z8z [1.196474822s] Dec 28 22:04:15.133: INFO: Created: latency-svc-6lkdh Dec 28 22:04:15.142: INFO: Got endpoints: latency-svc-6lkdh [1.143081667s] Dec 28 22:04:15.185: INFO: Created: latency-svc-2v4md Dec 28 22:04:15.191: INFO: Got endpoints: latency-svc-2v4md [1.000233573s] Dec 28 22:04:15.332: INFO: Created: latency-svc-xr74h Dec 28 22:04:15.367: INFO: Got endpoints: latency-svc-xr74h [1.110595745s] Dec 28 22:04:15.367: INFO: Created: latency-svc-llzgd Dec 28 22:04:15.380: INFO: Got endpoints: latency-svc-llzgd [979.180398ms] Dec 28 22:04:15.425: INFO: Created: latency-svc-plvjg Dec 28 22:04:15.426: INFO: Got endpoints: latency-svc-plvjg [998.438387ms] Dec 28 22:04:15.489: INFO: Created: latency-svc-9ctcd Dec 28 22:04:15.494: INFO: Got endpoints: latency-svc-9ctcd [1.016131919s] Dec 28 22:04:15.520: INFO: Created: latency-svc-8fhbc Dec 28 22:04:15.520: INFO: Got endpoints: latency-svc-8fhbc [941.867972ms] Dec 28 22:04:15.543: INFO: Created: latency-svc-29g2k Dec 28 22:04:15.547: INFO: Got endpoints: latency-svc-29g2k [934.489811ms] Dec 28 22:04:15.572: INFO: Created: latency-svc-cmqvs Dec 28 22:04:15.581: INFO: Got endpoints: latency-svc-cmqvs [868.995368ms] Dec 28 22:04:15.661: INFO: Created: latency-svc-4dvgs Dec 28 22:04:15.697: INFO: Got endpoints: 
latency-svc-4dvgs [945.473977ms] Dec 28 22:04:15.706: INFO: Created: latency-svc-7spfv Dec 28 22:04:15.710: INFO: Got endpoints: latency-svc-7spfv [798.086957ms] Dec 28 22:04:15.746: INFO: Created: latency-svc-whtq4 Dec 28 22:04:15.898: INFO: Got endpoints: latency-svc-whtq4 [975.303732ms] Dec 28 22:04:15.916: INFO: Created: latency-svc-psgp7 Dec 28 22:04:15.933: INFO: Got endpoints: latency-svc-psgp7 [958.72563ms] Dec 28 22:04:15.975: INFO: Created: latency-svc-v9njr Dec 28 22:04:15.983: INFO: Got endpoints: latency-svc-v9njr [890.235992ms] Dec 28 22:04:16.129: INFO: Created: latency-svc-bphn5 Dec 28 22:04:16.135: INFO: Got endpoints: latency-svc-bphn5 [1.013881742s] Dec 28 22:04:16.174: INFO: Created: latency-svc-f2khm Dec 28 22:04:16.179: INFO: Got endpoints: latency-svc-f2khm [1.03603315s] Dec 28 22:04:16.380: INFO: Created: latency-svc-fbfcd Dec 28 22:04:16.393: INFO: Got endpoints: latency-svc-fbfcd [1.201569376s] Dec 28 22:04:16.458: INFO: Created: latency-svc-s9642 Dec 28 22:04:16.458: INFO: Got endpoints: latency-svc-s9642 [1.091150613s] Dec 28 22:04:16.554: INFO: Created: latency-svc-j82cz Dec 28 22:04:16.580: INFO: Got endpoints: latency-svc-j82cz [1.199204322s] Dec 28 22:04:16.604: INFO: Created: latency-svc-fttgh Dec 28 22:04:16.604: INFO: Got endpoints: latency-svc-fttgh [1.177994157s] Dec 28 22:04:16.637: INFO: Created: latency-svc-kcz6h Dec 28 22:04:16.644: INFO: Got endpoints: latency-svc-kcz6h [1.149587177s] Dec 28 22:04:16.710: INFO: Created: latency-svc-bhlmd Dec 28 22:04:16.719: INFO: Got endpoints: latency-svc-bhlmd [1.199041007s] Dec 28 22:04:16.762: INFO: Created: latency-svc-gj98k Dec 28 22:04:16.762: INFO: Got endpoints: latency-svc-gj98k [1.215052415s] Dec 28 22:04:16.794: INFO: Created: latency-svc-29ct4 Dec 28 22:04:16.794: INFO: Got endpoints: latency-svc-29ct4 [1.213318732s] Dec 28 22:04:16.922: INFO: Created: latency-svc-2s6bp Dec 28 22:04:16.923: INFO: Got endpoints: latency-svc-2s6bp [1.225317822s] Dec 28 22:04:16.948: INFO: Created: latency-svc-zgrqv Dec 28 22:04:16.955: INFO: Got endpoints: latency-svc-zgrqv [1.244860241s] Dec 28 22:04:16.995: INFO: Created: latency-svc-5dlsx Dec 28 22:04:17.087: INFO: Got endpoints: latency-svc-5dlsx [1.187798558s] Dec 28 22:04:17.089: INFO: Created: latency-svc-8jv6c Dec 28 22:04:17.092: INFO: Got endpoints: latency-svc-8jv6c [1.158124589s] Dec 28 22:04:17.159: INFO: Created: latency-svc-kf4hr Dec 28 22:04:17.172: INFO: Got endpoints: latency-svc-kf4hr [1.189215307s] Dec 28 22:04:17.265: INFO: Created: latency-svc-9rz88 Dec 28 22:04:17.302: INFO: Got endpoints: latency-svc-9rz88 [1.16646117s] Dec 28 22:04:17.333: INFO: Created: latency-svc-76lp6 Dec 28 22:04:17.348: INFO: Got endpoints: latency-svc-76lp6 [1.169141888s] Dec 28 22:04:17.457: INFO: Created: latency-svc-q772v Dec 28 22:04:17.457: INFO: Got endpoints: latency-svc-q772v [1.064391875s] Dec 28 22:04:17.493: INFO: Created: latency-svc-gjk2p Dec 28 22:04:17.494: INFO: Got endpoints: latency-svc-gjk2p [1.035030693s] Dec 28 22:04:17.532: INFO: Created: latency-svc-2942t Dec 28 22:04:17.537: INFO: Got endpoints: latency-svc-2942t [956.71515ms] Dec 28 22:04:17.615: INFO: Created: latency-svc-96l5g Dec 28 22:04:17.625: INFO: Got endpoints: latency-svc-96l5g [1.02057006s] Dec 28 22:04:17.679: INFO: Created: latency-svc-9qdqp Dec 28 22:04:17.696: INFO: Got endpoints: latency-svc-9qdqp [1.051600274s] Dec 28 22:04:17.706: INFO: Created: latency-svc-hlnxc Dec 28 22:04:17.714: INFO: Got endpoints: latency-svc-hlnxc [994.705868ms] Dec 28 22:04:17.788: INFO: Created: 
latency-svc-6wq5t Dec 28 22:04:17.829: INFO: Created: latency-svc-2snnk Dec 28 22:04:17.835: INFO: Got endpoints: latency-svc-6wq5t [1.072871938s] Dec 28 22:04:17.837: INFO: Got endpoints: latency-svc-2snnk [1.042456318s] Dec 28 22:04:17.939: INFO: Created: latency-svc-q2p6c Dec 28 22:04:17.942: INFO: Got endpoints: latency-svc-q2p6c [1.018977884s] Dec 28 22:04:17.994: INFO: Created: latency-svc-fnznf Dec 28 22:04:18.006: INFO: Got endpoints: latency-svc-fnznf [1.050675936s] Dec 28 22:04:18.006: INFO: Latencies: [92.975193ms 127.693258ms 235.012583ms 267.906705ms 325.874925ms 460.605325ms 513.543317ms 632.133493ms 653.516469ms 708.329119ms 798.086957ms 846.328354ms 868.995368ms 890.235992ms 916.314155ms 934.489811ms 941.867972ms 945.473977ms 956.71515ms 958.72563ms 975.303732ms 979.180398ms 988.212942ms 994.705868ms 998.438387ms 1.000233573s 1.005628444s 1.013881742s 1.016131919s 1.018977884s 1.02057006s 1.021834152s 1.026275929s 1.035030693s 1.03603315s 1.042456318s 1.045362449s 1.050675936s 1.051600274s 1.053927471s 1.058714914s 1.064391875s 1.072871938s 1.079804587s 1.08023826s 1.08179015s 1.085717888s 1.091150613s 1.096114073s 1.105898271s 1.110595745s 1.113068485s 1.116152755s 1.125500632s 1.132780353s 1.135284666s 1.137830642s 1.143081667s 1.14686091s 1.147878642s 1.149587177s 1.156656226s 1.158124589s 1.16646117s 1.169141888s 1.17770588s 1.177994157s 1.182687046s 1.187051194s 1.187798558s 1.188840699s 1.189215307s 1.190267426s 1.192721672s 1.193046358s 1.196474822s 1.197562793s 1.199041007s 1.199204322s 1.201569376s 1.202848106s 1.213318732s 1.215052415s 1.219076045s 1.223435335s 1.225317822s 1.229837345s 1.235967586s 1.244860241s 1.249910306s 1.252035723s 1.252717916s 1.252996688s 1.258978451s 1.259890384s 1.268531578s 1.27008424s 1.270710944s 1.270787204s 1.283384996s 1.288434169s 1.290514248s 1.293037968s 1.294834547s 1.295571023s 1.303987734s 1.318275155s 1.321014415s 1.324887123s 1.328646288s 1.334505908s 1.337301341s 1.337879152s 1.338814227s 1.339314979s 1.33946816s 1.34196494s 1.342754132s 1.344044614s 1.346254691s 1.350715973s 1.352553734s 1.356195834s 1.357827298s 1.358112105s 1.35894355s 1.362855898s 1.363173177s 1.365073591s 1.373548547s 1.375070871s 1.378920471s 1.37900827s 1.37976794s 1.380522347s 1.380888366s 1.383541581s 1.384972745s 1.387801192s 1.390918306s 1.395074847s 1.401532763s 1.402292922s 1.402963498s 1.408059681s 1.416263792s 1.41849988s 1.419359605s 1.423381658s 1.423822195s 1.426121659s 1.432926544s 1.437824756s 1.439991365s 1.443859726s 1.450285426s 1.45258853s 1.454936121s 1.456982069s 1.460538025s 1.463697278s 1.466013194s 1.481779314s 1.48848681s 1.488948432s 1.499222191s 1.500505551s 1.500847896s 1.51239657s 1.525543245s 1.527591529s 1.528320327s 1.528637876s 1.543378739s 1.546326949s 1.550062845s 1.55030208s 1.550579125s 1.573653735s 1.573799998s 1.616843473s 1.639881856s 1.801617859s 1.817765575s 1.929485063s 2.029186821s 2.070228964s 2.074240255s 2.32523496s 2.325669861s 2.360924986s 2.499108058s 2.510457927s 2.614587971s 2.634718714s 2.642088535s 2.682479627s 2.697608915s 2.739489033s 2.739775834s] Dec 28 22:04:18.006: INFO: 50 %ile: 1.288434169s Dec 28 22:04:18.006: INFO: 90 %ile: 1.616843473s Dec 28 22:04:18.006: INFO: 99 %ile: 2.739489033s Dec 28 22:04:18.006: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:04:18.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"svc-latency-5597" for this suite. • [SLOW TEST:26.452 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":146,"skipped":2232,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:04:18.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 22:04:18.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Dec 28 22:04:19.137: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-28T22:04:19Z generation:1 name:name1 resourceVersion:10434481 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fd95f2bc-7a65-44a9-b02b-d73cdb37d49e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Dec 28 22:04:29.186: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-28T22:04:29Z generation:1 name:name2 resourceVersion:10434823 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8f5c8549-446e-4d05-b7db-ef2caf0cd819] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Dec 28 22:04:39.197: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-28T22:04:19Z generation:2 name:name1 resourceVersion:10435029 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fd95f2bc-7a65-44a9-b02b-d73cdb37d49e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Dec 28 22:04:49.347: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-28T22:04:29Z generation:2 name:name2 resourceVersion:10435267 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8f5c8549-446e-4d05-b7db-ef2caf0cd819] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Dec 28 22:04:59.365: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-28T22:04:19Z generation:2 name:name1 resourceVersion:10435461 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fd95f2bc-7a65-44a9-b02b-d73cdb37d49e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Dec 28 
22:05:09.393: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-28T22:04:29Z generation:2 name:name2 resourceVersion:10435506 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8f5c8549-446e-4d05-b7db-ef2caf0cd819] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:05:19.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9088" for this suite. • [SLOW TEST:61.928 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":147,"skipped":2246,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:05:19.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-shb4 STEP: Creating a pod to test atomic-volume-subpath Dec 28 22:05:20.094: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-shb4" in namespace "subpath-5425" to be "success or failure" Dec 28 22:05:20.097: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.509097ms Dec 28 22:05:22.105: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011076078s Dec 28 22:05:24.111: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016769125s Dec 28 22:05:26.121: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027590915s Dec 28 22:05:28.132: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Running", Reason="", readiness=true. Elapsed: 8.038059501s Dec 28 22:05:30.141: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Running", Reason="", readiness=true. Elapsed: 10.047297544s Dec 28 22:05:32.153: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.059219547s Dec 28 22:05:34.159: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Running", Reason="", readiness=true. Elapsed: 14.065464552s Dec 28 22:05:36.169: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Running", Reason="", readiness=true. Elapsed: 16.075247627s Dec 28 22:05:38.177: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Running", Reason="", readiness=true. Elapsed: 18.082869314s Dec 28 22:05:40.185: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Running", Reason="", readiness=true. Elapsed: 20.091378247s Dec 28 22:05:42.221: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Running", Reason="", readiness=true. Elapsed: 22.127084565s Dec 28 22:05:44.231: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Running", Reason="", readiness=true. Elapsed: 24.137333991s Dec 28 22:05:46.605: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Running", Reason="", readiness=true. Elapsed: 26.511410968s Dec 28 22:05:48.624: INFO: Pod "pod-subpath-test-configmap-shb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.530048271s STEP: Saw pod success Dec 28 22:05:48.624: INFO: Pod "pod-subpath-test-configmap-shb4" satisfied condition "success or failure" Dec 28 22:05:48.631: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-shb4 container test-container-subpath-configmap-shb4: STEP: delete the pod Dec 28 22:05:48.695: INFO: Waiting for pod pod-subpath-test-configmap-shb4 to disappear Dec 28 22:05:48.703: INFO: Pod pod-subpath-test-configmap-shb4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-shb4 Dec 28 22:05:48.703: INFO: Deleting pod "pod-subpath-test-configmap-shb4" in namespace "subpath-5425" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:05:48.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5425" for this suite. 
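The "Atomic writer volumes" wording refers to how configMap (and secret/downward API) volumes are updated: contents are written to a timestamped directory and swapped in via a symlink, so readers never see a half-written key. The subPath variant mounts a single key out of such a volume. A minimal sketch with illustrative names (the logged pod is pod-subpath-test-configmap-shb4):

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/mypath"]
    volumeMounts:
    - name: config
      mountPath: /test-volume/mypath
      subPath: mypath                  # mount one key of the volume rather than the whole directory
  volumes:
  - name: config
    configMap:
      name: my-configmap               # assumed to exist with a key named mypath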
• [SLOW TEST:28.876 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":148,"skipped":2246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:05:48.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Dec 28 22:05:48.978: INFO: Waiting up to 5m0s for pod "downward-api-6b37ed13-462d-4d5e-b582-7d27c0c77e64" in namespace "downward-api-7936" to be "success or failure" Dec 28 22:05:49.010: INFO: Pod "downward-api-6b37ed13-462d-4d5e-b582-7d27c0c77e64": Phase="Pending", Reason="", readiness=false. Elapsed: 31.704999ms Dec 28 22:05:51.017: INFO: Pod "downward-api-6b37ed13-462d-4d5e-b582-7d27c0c77e64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039613945s Dec 28 22:05:53.023: INFO: Pod "downward-api-6b37ed13-462d-4d5e-b582-7d27c0c77e64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045505591s Dec 28 22:05:55.043: INFO: Pod "downward-api-6b37ed13-462d-4d5e-b582-7d27c0c77e64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065082578s Dec 28 22:05:57.051: INFO: Pod "downward-api-6b37ed13-462d-4d5e-b582-7d27c0c77e64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07338719s STEP: Saw pod success Dec 28 22:05:57.052: INFO: Pod "downward-api-6b37ed13-462d-4d5e-b582-7d27c0c77e64" satisfied condition "success or failure" Dec 28 22:05:57.056: INFO: Trying to get logs from node jerma-node pod downward-api-6b37ed13-462d-4d5e-b582-7d27c0c77e64 container dapi-container: STEP: delete the pod Dec 28 22:05:57.090: INFO: Waiting for pod downward-api-6b37ed13-462d-4d5e-b582-7d27c0c77e64 to disappear Dec 28 22:05:57.114: INFO: Pod downward-api-6b37ed13-462d-4d5e-b582-7d27c0c77e64 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:05:57.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7936" for this suite. 
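The downward API surfaces pod metadata to the container itself; the UID case wires metadata.uid into an env var via fieldRef. A minimal sketch (pod name, image, and variable name are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid      # the pod UID assigned by the API server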
• [SLOW TEST:8.285 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2398,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:05:57.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 28 22:06:03.831: INFO: Successfully updated pod "pod-update-79e6e35e-6150-4b97-a775-a316ee620ac0" STEP: verifying the updated pod is in kubernetes Dec 28 22:06:03.846: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:06:03.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5551" for this suite. 
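The update in this spec is an in-place mutation of the live pod through the API; the upstream fixture tweaks pod metadata and reads the object back to confirm the change. Expressed as a strategic merge patch it is roughly the following (the label key and value are illustrative assumptions):

# e.g. kubectl patch pod pod-update-79e6e35e-6150-4b97-a775-a316ee620ac0 -p "$(cat patch.yaml)"
metadata:
  labels:
    time: updated                      # illustrative; only mutable fields such as labels can be patched on a running pod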
• [SLOW TEST:6.740 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2404,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:06:03.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6757 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 28 22:06:03.982: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 28 22:06:40.241: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6757 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 28 22:06:40.241: INFO: >>> kubeConfig: /root/.kube/config Dec 28 22:06:41.524: INFO: Found all expected endpoints: [netserver-0] Dec 28 22:06:41.538: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6757 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 28 22:06:41.538: INFO: >>> kubeConfig: /root/.kube/config Dec 28 22:06:42.742: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:06:42.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6757" for this suite. 
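Per the upstream networking fixture, host-test-container-pod runs with hostNetwork: true, so the nc probes logged above originate from the node's own network namespace and target each netserver pod IP on UDP 8081; that is the "node-pod" direction in the spec title. A sketch of such a probe pod (image and command are illustrative stand-ins for the suite's agnhost image):

apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod
spec:
  hostNetwork: true                    # share the node's network namespace
  containers:
  - name: agnhost
    image: busybox                     # stand-in; the suite uses an agnhost test image
    command: ["sh", "-c", "sleep 3600"]

The probe itself is then an exec into this pod with the pipeline shown in the log: echo hostName | nc -w 1 -u <pod-ip> 8081.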
• [SLOW TEST:38.895 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2408,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:06:42.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 22:06:42.928: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Dec 28 22:06:46.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4648 create -f -' Dec 28 22:06:50.398: INFO: stderr: "" Dec 28 22:06:50.398: INFO: stdout: "e2e-test-crd-publish-openapi-4638-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Dec 28 22:06:50.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4648 delete e2e-test-crd-publish-openapi-4638-crds test-cr' Dec 28 22:06:50.698: INFO: stderr: "" Dec 28 22:06:50.698: INFO: stdout: "e2e-test-crd-publish-openapi-4638-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Dec 28 22:06:50.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4648 apply -f -' Dec 28 22:06:51.224: INFO: stderr: "" Dec 28 22:06:51.224: INFO: stdout: "e2e-test-crd-publish-openapi-4638-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Dec 28 22:06:51.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4648 delete e2e-test-crd-publish-openapi-4638-crds test-cr' Dec 28 22:06:51.448: INFO: stderr: "" Dec 28 22:06:51.449: INFO: stdout: "e2e-test-crd-publish-openapi-4638-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Dec 28 22:06:51.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4638-crds' Dec 28 22:06:51.932: INFO: stderr: "" Dec 28 22:06:51.933: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4638-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:06:55.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4648" for this suite. • [SLOW TEST:13.245 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":152,"skipped":2413,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:06:56.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-55e12786-516b-41d6-8cbf-f4d374a2b332 STEP: Creating a pod to test consume secrets Dec 28 22:06:56.111: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b4e472d-4d2e-4f07-bda2-c73b5d6076ef" in namespace "projected-6015" to be "success or failure" Dec 28 22:06:56.155: INFO: Pod "pod-projected-secrets-2b4e472d-4d2e-4f07-bda2-c73b5d6076ef": Phase="Pending", Reason="", readiness=false. Elapsed: 43.905612ms Dec 28 22:06:58.162: INFO: Pod "pod-projected-secrets-2b4e472d-4d2e-4f07-bda2-c73b5d6076ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051579852s Dec 28 22:07:00.172: INFO: Pod "pod-projected-secrets-2b4e472d-4d2e-4f07-bda2-c73b5d6076ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061101757s Dec 28 22:07:02.180: INFO: Pod "pod-projected-secrets-2b4e472d-4d2e-4f07-bda2-c73b5d6076ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068932419s Dec 28 22:07:04.198: INFO: Pod "pod-projected-secrets-2b4e472d-4d2e-4f07-bda2-c73b5d6076ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.087315125s STEP: Saw pod success Dec 28 22:07:04.198: INFO: Pod "pod-projected-secrets-2b4e472d-4d2e-4f07-bda2-c73b5d6076ef" satisfied condition "success or failure" Dec 28 22:07:04.204: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-2b4e472d-4d2e-4f07-bda2-c73b5d6076ef container projected-secret-volume-test: STEP: delete the pod Dec 28 22:07:04.269: INFO: Waiting for pod pod-projected-secrets-2b4e472d-4d2e-4f07-bda2-c73b5d6076ef to disappear Dec 28 22:07:04.277: INFO: Pod pod-projected-secrets-2b4e472d-4d2e-4f07-bda2-c73b5d6076ef no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:07:04.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6015" for this suite. • [SLOW TEST:8.282 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2414,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:07:04.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Dec 28 22:07:04.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-462 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Dec 28 22:07:11.736: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" Dec 28 22:07:11.736: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:07:13.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-462" for this suite. 
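Note the stderr above: --generator=job/v1 is deprecated. A sketch of the pod-based equivalent the warning points to (this creates a bare pod rather than the Job the original test exercises; flags as in kubectl of this era):

kubectl --kubeconfig=/root/.kube/config -n kubectl-462 run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 --rm --restart=Never --stdin --attach \
  -- sh -c "cat && echo 'stdin closed'"

--rm deletes the object once the attached command exits, mirroring the "job.batch \"e2e-test-rm-busybox-job\" deleted" line above.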
• [SLOW TEST:9.484 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":154,"skipped":2416,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:07:13.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 28 22:07:13.992: INFO: Waiting up to 5m0s for pod "pod-c8cc1643-e4f5-434d-bfb4-2a68aff8ab75" in namespace "emptydir-5050" to be "success or failure" Dec 28 22:07:14.000: INFO: Pod "pod-c8cc1643-e4f5-434d-bfb4-2a68aff8ab75": Phase="Pending", Reason="", readiness=false. Elapsed: 7.43967ms Dec 28 22:07:16.016: INFO: Pod "pod-c8cc1643-e4f5-434d-bfb4-2a68aff8ab75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02391973s Dec 28 22:07:18.025: INFO: Pod "pod-c8cc1643-e4f5-434d-bfb4-2a68aff8ab75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032642169s Dec 28 22:07:20.038: INFO: Pod "pod-c8cc1643-e4f5-434d-bfb4-2a68aff8ab75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045763111s Dec 28 22:07:22.046: INFO: Pod "pod-c8cc1643-e4f5-434d-bfb4-2a68aff8ab75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054111196s STEP: Saw pod success Dec 28 22:07:22.046: INFO: Pod "pod-c8cc1643-e4f5-434d-bfb4-2a68aff8ab75" satisfied condition "success or failure" Dec 28 22:07:22.049: INFO: Trying to get logs from node jerma-node pod pod-c8cc1643-e4f5-434d-bfb4-2a68aff8ab75 container test-container: STEP: delete the pod Dec 28 22:07:22.110: INFO: Waiting for pod pod-c8cc1643-e4f5-434d-bfb4-2a68aff8ab75 to disappear Dec 28 22:07:22.123: INFO: Pod pod-c8cc1643-e4f5-434d-bfb4-2a68aff8ab75 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:07:22.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5050" for this suite. 
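A minimal sketch of the kind of pod this test builds (all names hypothetical; the real test uses the framework's mount-test image rather than busybox): an emptyDir on the node's default medium, written by a non-root container, with the file mode echoed for verification.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /cache/f && chmod 0666 /cache/f && ls -l /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}           # the "default medium" part of the test name
EOF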
• [SLOW TEST:8.423 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2435,"failed":0} SSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:07:22.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:07:22.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8308" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":156,"skipped":2442,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:07:22.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-c917f075-81ae-44f1-b46a-09a83d65ed92 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:07:32.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7295" for this suite. 
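The binary-data step above relies on the ConfigMap binaryData field, which holds base64-encoded bytes alongside plain-text data. A minimal sketch with hypothetical names and values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo
data:
  text: "hello"            # appears as a regular text file when mounted
binaryData:
  blob: AQIDBA==           # base64 of the raw bytes 0x01 0x02 0x03 0x04
EOF

Mounted as a volume, both keys appear as files; the test above waits until both the text and the binary content are visible in the pod.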
• [SLOW TEST:10.198 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2446,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:07:32.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 28 22:07:33.584: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 28 22:07:35.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 22:07:38.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 22:07:39.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167653, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 28 22:07:42.740: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:07:53.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9534" for this suite. STEP: Destroying namespace "webhook-9534-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.450 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":158,"skipped":2482,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:07:53.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Dec 28 22:07:53.288: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5474" to be "success or failure" Dec 28 22:07:53.318: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 29.948827ms Dec 28 22:07:55.326: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038353743s Dec 28 22:07:57.342: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054058947s Dec 28 22:07:59.358: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069979435s Dec 28 22:08:01.363: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075050203s Dec 28 22:08:03.374: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08657499s STEP: Saw pod success Dec 28 22:08:03.375: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Dec 28 22:08:03.382: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: STEP: delete the pod Dec 28 22:08:03.428: INFO: Waiting for pod pod-host-path-test to disappear Dec 28 22:08:03.435: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:08:03.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5474" for this suite. 
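A minimal sketch of a hostPath pod like the one above (hypothetical names and host path); the test asserts that the mount point inside the container carries the expected mode:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume"]   # prints the volume's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF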
• [SLOW TEST:10.290 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2496,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:08:03.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Dec 28 22:08:03.555: INFO: >>> kubeConfig: /root/.kube/config Dec 28 22:08:06.554: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:08:17.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4136" for this suite. 
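Both crd-publish-openapi tests in this run hinge on the schema the CRD publishes. A minimal sketch (hypothetical group and kind) of a CRD that preserves unknown fields at the schema root, the shape exercised a few tests earlier; the published OpenAPI is what makes kubectl explain and client-side validation of the CR work:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept arbitrary fields
EOF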
• [SLOW TEST:14.144 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":160,"skipped":2508,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:08:17.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f8fc6cd3-e829-4a07-97e1-19189efa1dec STEP: Creating a pod to test consume configMaps Dec 28 22:08:17.760: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-121921d7-21cc-4dfd-8731-acc5b6870d23" in namespace "projected-6144" to be "success or failure" Dec 28 22:08:17.788: INFO: Pod "pod-projected-configmaps-121921d7-21cc-4dfd-8731-acc5b6870d23": Phase="Pending", Reason="", readiness=false. Elapsed: 27.925019ms Dec 28 22:08:19.805: INFO: Pod "pod-projected-configmaps-121921d7-21cc-4dfd-8731-acc5b6870d23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044807233s Dec 28 22:08:21.816: INFO: Pod "pod-projected-configmaps-121921d7-21cc-4dfd-8731-acc5b6870d23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055619669s Dec 28 22:08:23.833: INFO: Pod "pod-projected-configmaps-121921d7-21cc-4dfd-8731-acc5b6870d23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072613709s Dec 28 22:08:25.844: INFO: Pod "pod-projected-configmaps-121921d7-21cc-4dfd-8731-acc5b6870d23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083542565s STEP: Saw pod success Dec 28 22:08:25.844: INFO: Pod "pod-projected-configmaps-121921d7-21cc-4dfd-8731-acc5b6870d23" satisfied condition "success or failure" Dec 28 22:08:25.852: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-121921d7-21cc-4dfd-8731-acc5b6870d23 container projected-configmap-volume-test: STEP: delete the pod Dec 28 22:08:25.926: INFO: Waiting for pod pod-projected-configmaps-121921d7-21cc-4dfd-8731-acc5b6870d23 to disappear Dec 28 22:08:25.931: INFO: Pod pod-projected-configmaps-121921d7-21cc-4dfd-8731-acc5b6870d23 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:08:25.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6144" for this suite. 
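A minimal sketch of the projected-volume consumption above (hypothetical names; assumes a ConfigMap called demo-config with a key named key already exists in the namespace):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # the "as non-root" part of the test
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
EOF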
• [SLOW TEST:8.349 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2530,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:08:25.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Dec 28 22:08:26.093: INFO: Waiting up to 5m0s for pod "pod-cfd0870b-8de6-4eeb-95ca-230e24befc47" in namespace "emptydir-6877" to be "success or failure" Dec 28 22:08:26.111: INFO: Pod "pod-cfd0870b-8de6-4eeb-95ca-230e24befc47": Phase="Pending", Reason="", readiness=false. Elapsed: 17.368043ms Dec 28 22:08:28.121: INFO: Pod "pod-cfd0870b-8de6-4eeb-95ca-230e24befc47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027063719s Dec 28 22:08:30.135: INFO: Pod "pod-cfd0870b-8de6-4eeb-95ca-230e24befc47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041789134s Dec 28 22:08:32.154: INFO: Pod "pod-cfd0870b-8de6-4eeb-95ca-230e24befc47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060019051s Dec 28 22:08:34.171: INFO: Pod "pod-cfd0870b-8de6-4eeb-95ca-230e24befc47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077187933s STEP: Saw pod success Dec 28 22:08:34.171: INFO: Pod "pod-cfd0870b-8de6-4eeb-95ca-230e24befc47" satisfied condition "success or failure" Dec 28 22:08:34.176: INFO: Trying to get logs from node jerma-node pod pod-cfd0870b-8de6-4eeb-95ca-230e24befc47 container test-container: STEP: delete the pod Dec 28 22:08:34.226: INFO: Waiting for pod pod-cfd0870b-8de6-4eeb-95ca-230e24befc47 to disappear Dec 28 22:08:34.302: INFO: Pod pod-cfd0870b-8de6-4eeb-95ca-230e24befc47 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:08:34.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6877" for this suite. 
• [SLOW TEST:8.353 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2538,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:08:34.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-jc9t STEP: Creating a pod to test atomic-volume-subpath Dec 28 22:08:34.525: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jc9t" in namespace "subpath-5392" to be "success or failure" Dec 28 22:08:34.547: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Pending", Reason="", readiness=false. Elapsed: 21.647231ms Dec 28 22:08:36.559: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033403986s Dec 28 22:08:38.580: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055127147s Dec 28 22:08:40.588: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062450154s Dec 28 22:08:42.609: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Running", Reason="", readiness=true. Elapsed: 8.083666938s Dec 28 22:08:44.616: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Running", Reason="", readiness=true. Elapsed: 10.090754325s Dec 28 22:08:46.632: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Running", Reason="", readiness=true. Elapsed: 12.106428832s Dec 28 22:08:48.641: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Running", Reason="", readiness=true. Elapsed: 14.115836999s Dec 28 22:08:50.650: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Running", Reason="", readiness=true. Elapsed: 16.124499581s Dec 28 22:08:52.661: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Running", Reason="", readiness=true. Elapsed: 18.135304682s Dec 28 22:08:54.679: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Running", Reason="", readiness=true. Elapsed: 20.154066111s Dec 28 22:08:56.685: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Running", Reason="", readiness=true. Elapsed: 22.159872994s Dec 28 22:08:58.696: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.171163652s Dec 28 22:09:00.714: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Running", Reason="", readiness=true. Elapsed: 26.189008011s Dec 28 22:09:02.734: INFO: Pod "pod-subpath-test-configmap-jc9t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.209109965s STEP: Saw pod success Dec 28 22:09:02.735: INFO: Pod "pod-subpath-test-configmap-jc9t" satisfied condition "success or failure" Dec 28 22:09:02.747: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-jc9t container test-container-subpath-configmap-jc9t: STEP: delete the pod Dec 28 22:09:02.885: INFO: Waiting for pod pod-subpath-test-configmap-jc9t to disappear Dec 28 22:09:02.904: INFO: Pod pod-subpath-test-configmap-jc9t no longer exists STEP: Deleting pod pod-subpath-test-configmap-jc9t Dec 28 22:09:02.904: INFO: Deleting pod "pod-subpath-test-configmap-jc9t" in namespace "subpath-5392" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:09:02.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5392" for this suite. • [SLOW TEST:28.633 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":163,"skipped":2538,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:09:02.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2832.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2832.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2832.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2832.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2832.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2832.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local;check="$$(dig +notcp +noall +answer 
+search _http._tcp.test-service-2.dns-2832.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2832.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2832.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2832.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2832.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 139.21.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.21.139_udp@PTR;check="$$(dig +tcp +noall +answer +search 139.21.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.21.139_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2832.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2832.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2832.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2832.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2832.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2832.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2832.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2832.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2832.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2832.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2832.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 139.21.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.21.139_udp@PTR;check="$$(dig +tcp +noall +answer +search 139.21.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.21.139_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 28 22:09:13.305: INFO: Unable to read wheezy_udp@dns-test-service.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.313: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.319: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.326: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.331: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.334: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.338: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.342: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.345: INFO: Unable to read 10.109.21.139_udp@PTR from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.348: INFO: Unable to read 10.109.21.139_tcp@PTR from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.352: INFO: Unable to read jessie_udp@dns-test-service.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.355: INFO: Unable to read jessie_tcp@dns-test-service.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.358: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the 
server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.361: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.365: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.370: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-2832.svc.cluster.local from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.374: INFO: Unable to read jessie_udp@PodARecord from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.376: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.380: INFO: Unable to read 10.109.21.139_udp@PTR from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.384: INFO: Unable to read 10.109.21.139_tcp@PTR from pod dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9: the server could not find the requested resource (get pods dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9) Dec 28 22:09:13.384: INFO: Lookups using dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9 failed for: [wheezy_udp@dns-test-service.dns-2832.svc.cluster.local wheezy_tcp@dns-test-service.dns-2832.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-2832.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-2832.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.109.21.139_udp@PTR 10.109.21.139_tcp@PTR jessie_udp@dns-test-service.dns-2832.svc.cluster.local jessie_tcp@dns-test-service.dns-2832.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2832.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-2832.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-2832.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.109.21.139_udp@PTR 10.109.21.139_tcp@PTR] Dec 28 22:09:18.725: INFO: DNS probes using dns-2832/dns-test-1d7079ae-8e90-4bc5-b3a6-f5ce00c6f4a9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:09:19.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2832" for this suite. 
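Stripped of the result-file plumbing, the probe loops above reduce to dig queries like these (service name, namespace, and PTR address are from this run; run them from any pod with dig available):

dig +notcp +noall +answer +search dns-test-service.dns-2832.svc.cluster.local A
dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2832.svc.cluster.local SRV
dig +notcp +noall +answer 139.21.109.10.in-addr.arpa. PTR

The early "Unable to read" lines are expected: the probers retry once per second for up to 600 seconds, and here every record resolved by the 22:09:18 success line.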
• [SLOW TEST:16.238 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":164,"skipped":2555,"failed":0} SSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:09:19.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Dec 28 22:09:27.966: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1095 pod-service-account-eb00033e-4940-4861-8c0f-23a4b69050a8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Dec 28 22:09:28.362: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1095 pod-service-account-eb00033e-4940-4861-8c0f-23a4b69050a8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Dec 28 22:09:28.773: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1095 pod-service-account-eb00033e-4940-4861-8c0f-23a4b69050a8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:09:29.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1095" for this suite. 
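The three reads above, as standalone commands (pod name and namespace are from this run); every pod with an auto-mounted service-account token exposes these files:

kubectl --kubeconfig=/root/.kube/config exec -n svcaccounts-1095 \
  pod-service-account-eb00033e-4940-4861-8c0f-23a4b69050a8 -c test -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl --kubeconfig=/root/.kube/config exec -n svcaccounts-1095 \
  pod-service-account-eb00033e-4940-4861-8c0f-23a4b69050a8 -c test -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt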
• [SLOW TEST:9.980 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":165,"skipped":2561,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:09:29.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Dec 28 22:09:41.324: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4948 PodName:pod-sharedvolume-1259953e-37f3-4eb9-aeaa-3523aa8bb00b ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 28 22:09:41.324: INFO: >>> kubeConfig: /root/.kube/config Dec 28 22:09:41.585: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:09:41.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4948" for this suite.
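A minimal sketch of the two-container pattern above (hypothetical container names; the file path is the one the test reads): one container writes into a shared emptyDir, the other reads the same file.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  containers:
  - name: writer
    image: busybox:1.29
    command: ["sh", "-c", "echo Hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  volumes:
  - name: share
    emptyDir: {}
EOF
kubectl exec shared-volume-demo -c reader -- cat /usr/share/volumeshare/shareddata.txt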
• [SLOW TEST:12.423 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":166,"skipped":2564,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:09:41.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 28 22:09:42.488: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 28 22:09:44.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 22:09:46.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Dec 28 22:09:48.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 22:09:50.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167782, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 28 22:09:53.659: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 28 22:09:53.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4738" for this suite. STEP: Destroying namespace "webhook-4738-markers" for this suite. 
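This is the mutating counterpart of the validating configuration sketched earlier; same hypothetical names and the same caBundle caveat. The differences are the kind and the fact that the webhook server must answer the AdmissionReview with a JSONPatch for the configmap to be rewritten:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-demo
webhooks:
- name: mutate-cm.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: default
      name: e2e-test-webhook
      path: /mutating-configmaps
EOF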
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.408 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":167,"skipped":2579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 28 22:09:54.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 28 22:09:54.270: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 19.149713ms)
Dec 28 22:09:54.277: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.420479ms)
Dec 28 22:09:54.288: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.667557ms)
Dec 28 22:09:54.294: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.809641ms)
Dec 28 22:09:54.299: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.061607ms)
Dec 28 22:09:54.305: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.29399ms)
Dec 28 22:09:54.309: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.609664ms)
Dec 28 22:09:54.314: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.602311ms)
Dec 28 22:09:54.318: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.761284ms)
Dec 28 22:09:54.322: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.539192ms)
Dec 28 22:09:54.325: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.106503ms)
Dec 28 22:09:54.328: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.415065ms)
Dec 28 22:09:54.381: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 52.849474ms)
Dec 28 22:09:54.388: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.553165ms)
Dec 28 22:09:54.393: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.535304ms)
Dec 28 22:09:54.396: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.44809ms)
Dec 28 22:09:54.399: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.905045ms)
Dec 28 22:09:54.402: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.813281ms)
Dec 28 22:09:54.406: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.811573ms)
Dec 28 22:09:54.409: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.533825ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:09:54.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2489" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":168,"skipped":2606,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:09:54.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Dec 28 22:09:55.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7788'
Dec 28 22:09:55.612: INFO: stderr: ""
Dec 28 22:09:55.612: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 22:09:55.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7788'
Dec 28 22:09:55.992: INFO: stderr: ""
Dec 28 22:09:55.992: INFO: stdout: "update-demo-nautilus-2mgl5 update-demo-nautilus-tpl6d "
Dec 28 22:09:55.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2mgl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:09:56.162: INFO: stderr: ""
Dec 28 22:09:56.162: INFO: stdout: ""
Dec 28 22:09:56.162: INFO: update-demo-nautilus-2mgl5 is created but not running
Dec 28 22:10:01.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7788'
Dec 28 22:10:02.237: INFO: stderr: ""
Dec 28 22:10:02.237: INFO: stdout: "update-demo-nautilus-2mgl5 update-demo-nautilus-tpl6d "
Dec 28 22:10:02.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2mgl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:02.600: INFO: stderr: ""
Dec 28 22:10:02.600: INFO: stdout: ""
Dec 28 22:10:02.600: INFO: update-demo-nautilus-2mgl5 is created but not running
Dec 28 22:10:07.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7788'
Dec 28 22:10:07.787: INFO: stderr: ""
Dec 28 22:10:07.787: INFO: stdout: "update-demo-nautilus-2mgl5 update-demo-nautilus-tpl6d "
Dec 28 22:10:07.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2mgl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:07.946: INFO: stderr: ""
Dec 28 22:10:07.946: INFO: stdout: "true"
Dec 28 22:10:07.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2mgl5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:08.045: INFO: stderr: ""
Dec 28 22:10:08.045: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 22:10:08.045: INFO: validating pod update-demo-nautilus-2mgl5
Dec 28 22:10:08.077: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 22:10:08.077: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 22:10:08.077: INFO: update-demo-nautilus-2mgl5 is verified up and running
Dec 28 22:10:08.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpl6d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:08.159: INFO: stderr: ""
Dec 28 22:10:08.159: INFO: stdout: "true"
Dec 28 22:10:08.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpl6d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:08.293: INFO: stderr: ""
Dec 28 22:10:08.293: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 22:10:08.293: INFO: validating pod update-demo-nautilus-tpl6d
Dec 28 22:10:08.300: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 22:10:08.300: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 22:10:08.300: INFO: update-demo-nautilus-tpl6d is verified up and running
STEP: scaling down the replication controller
Dec 28 22:10:08.303: INFO: scanned /root for discovery docs: 
Dec 28 22:10:08.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7788'
Dec 28 22:10:09.700: INFO: stderr: ""
Dec 28 22:10:09.700: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 22:10:09.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7788'
Dec 28 22:10:09.889: INFO: stderr: ""
Dec 28 22:10:09.890: INFO: stdout: "update-demo-nautilus-2mgl5 update-demo-nautilus-tpl6d "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 28 22:10:14.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7788'
Dec 28 22:10:15.012: INFO: stderr: ""
Dec 28 22:10:15.012: INFO: stdout: "update-demo-nautilus-2mgl5 update-demo-nautilus-tpl6d "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 28 22:10:20.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7788'
Dec 28 22:10:20.230: INFO: stderr: ""
Dec 28 22:10:20.230: INFO: stdout: "update-demo-nautilus-tpl6d "
Dec 28 22:10:20.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpl6d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:20.555: INFO: stderr: ""
Dec 28 22:10:20.555: INFO: stdout: "true"
Dec 28 22:10:20.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpl6d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:20.738: INFO: stderr: ""
Dec 28 22:10:20.738: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 22:10:20.738: INFO: validating pod update-demo-nautilus-tpl6d
Dec 28 22:10:20.766: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 22:10:20.766: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 22:10:20.766: INFO: update-demo-nautilus-tpl6d is verified up and running
STEP: scaling up the replication controller
Dec 28 22:10:20.769: INFO: scanned /root for discovery docs: 
Dec 28 22:10:20.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7788'
Dec 28 22:10:21.964: INFO: stderr: ""
Dec 28 22:10:21.964: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 22:10:21.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7788'
Dec 28 22:10:22.229: INFO: stderr: ""
Dec 28 22:10:22.230: INFO: stdout: "update-demo-nautilus-2gzx6 update-demo-nautilus-tpl6d "
Dec 28 22:10:22.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gzx6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:22.385: INFO: stderr: ""
Dec 28 22:10:22.385: INFO: stdout: ""
Dec 28 22:10:22.385: INFO: update-demo-nautilus-2gzx6 is created but not running
Dec 28 22:10:27.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7788'
Dec 28 22:10:27.656: INFO: stderr: ""
Dec 28 22:10:27.656: INFO: stdout: "update-demo-nautilus-2gzx6 update-demo-nautilus-tpl6d "
Dec 28 22:10:27.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gzx6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:27.789: INFO: stderr: ""
Dec 28 22:10:27.789: INFO: stdout: "true"
Dec 28 22:10:27.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gzx6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:27.939: INFO: stderr: ""
Dec 28 22:10:27.939: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 22:10:27.939: INFO: validating pod update-demo-nautilus-2gzx6
Dec 28 22:10:27.945: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 22:10:27.945: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 22:10:27.945: INFO: update-demo-nautilus-2gzx6 is verified up and running
Dec 28 22:10:27.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpl6d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:28.062: INFO: stderr: ""
Dec 28 22:10:28.062: INFO: stdout: "true"
Dec 28 22:10:28.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpl6d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7788'
Dec 28 22:10:28.180: INFO: stderr: ""
Dec 28 22:10:28.181: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 22:10:28.181: INFO: validating pod update-demo-nautilus-tpl6d
Dec 28 22:10:28.186: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 22:10:28.186: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 22:10:28.186: INFO: update-demo-nautilus-tpl6d is verified up and running
STEP: using delete to clean up resources
Dec 28 22:10:28.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7788'
Dec 28 22:10:28.290: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 22:10:28.290: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 28 22:10:28.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7788'
Dec 28 22:10:28.449: INFO: stderr: "No resources found in kubectl-7788 namespace.\n"
Dec 28 22:10:28.449: INFO: stdout: ""
Dec 28 22:10:28.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7788 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 22:10:28.662: INFO: stderr: ""
Dec 28 22:10:28.663: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:10:28.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7788" for this suite.

• [SLOW TEST:34.305 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":169,"skipped":2607,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:10:28.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 28 22:10:28.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3083'
Dec 28 22:10:28.946: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 22:10:28.947: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Dec 28 22:10:29.027: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-wcj8v]
Dec 28 22:10:29.027: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-wcj8v" in namespace "kubectl-3083" to be "running and ready"
Dec 28 22:10:29.029: INFO: Pod "e2e-test-httpd-rc-wcj8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232587ms
Dec 28 22:10:31.064: INFO: Pod "e2e-test-httpd-rc-wcj8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037244142s
Dec 28 22:10:33.072: INFO: Pod "e2e-test-httpd-rc-wcj8v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044993425s
Dec 28 22:10:35.084: INFO: Pod "e2e-test-httpd-rc-wcj8v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056681614s
Dec 28 22:10:37.099: INFO: Pod "e2e-test-httpd-rc-wcj8v": Phase="Running", Reason="", readiness=true. Elapsed: 8.071502888s
Dec 28 22:10:37.099: INFO: Pod "e2e-test-httpd-rc-wcj8v" satisfied condition "running and ready"
Dec 28 22:10:37.099: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-wcj8v]
Dec 28 22:10:37.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-3083'
Dec 28 22:10:37.292: INFO: stderr: ""
Dec 28 22:10:37.292: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Sat Dec 28 22:10:35.748673 2019] [mpm_event:notice] [pid 1:tid 140190805687144] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Dec 28 22:10:35.748748 2019] [core:notice] [pid 1:tid 140190805687144] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Dec 28 22:10:37.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3083'
Dec 28 22:10:37.505: INFO: stderr: ""
Dec 28 22:10:37.506: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:10:37.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3083" for this suite.

• [SLOW TEST:8.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":170,"skipped":2616,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:10:37.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-88cd8e6b-3695-4e88-8b02-844d16a4f6df in namespace container-probe-5614
Dec 28 22:10:45.605: INFO: Started pod busybox-88cd8e6b-3695-4e88-8b02-844d16a4f6df in namespace container-probe-5614
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 22:10:45.611: INFO: Initial restart count of pod busybox-88cd8e6b-3695-4e88-8b02-844d16a4f6df is 0
Dec 28 22:11:39.986: INFO: Restart count of pod container-probe-5614/busybox-88cd8e6b-3695-4e88-8b02-844d16a4f6df is now 1 (54.374313956s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:11:40.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5614" for this suite.

• [SLOW TEST:62.551 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2646,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:11:40.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 28 22:11:40.333: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fbbc8eb-9444-4b3b-b094-de7aeef950d7" in namespace "downward-api-1163" to be "success or failure"
Dec 28 22:11:40.361: INFO: Pod "downwardapi-volume-0fbbc8eb-9444-4b3b-b094-de7aeef950d7": Phase="Pending", Reason="", readiness=false. Elapsed: 27.326519ms
Dec 28 22:11:42.369: INFO: Pod "downwardapi-volume-0fbbc8eb-9444-4b3b-b094-de7aeef950d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03545738s
Dec 28 22:11:44.375: INFO: Pod "downwardapi-volume-0fbbc8eb-9444-4b3b-b094-de7aeef950d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041839962s
Dec 28 22:11:46.383: INFO: Pod "downwardapi-volume-0fbbc8eb-9444-4b3b-b094-de7aeef950d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049893827s
Dec 28 22:11:48.393: INFO: Pod "downwardapi-volume-0fbbc8eb-9444-4b3b-b094-de7aeef950d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059356876s
STEP: Saw pod success
Dec 28 22:11:48.393: INFO: Pod "downwardapi-volume-0fbbc8eb-9444-4b3b-b094-de7aeef950d7" satisfied condition "success or failure"
Dec 28 22:11:48.401: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0fbbc8eb-9444-4b3b-b094-de7aeef950d7 container client-container: 
STEP: delete the pod
Dec 28 22:11:48.527: INFO: Waiting for pod downwardapi-volume-0fbbc8eb-9444-4b3b-b094-de7aeef950d7 to disappear
Dec 28 22:11:48.543: INFO: Pod downwardapi-volume-0fbbc8eb-9444-4b3b-b094-de7aeef950d7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:11:48.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1163" for this suite.

• [SLOW TEST:8.495 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2649,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:11:48.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 28 22:11:49.219: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 28 22:11:51.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:11:53.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:11:55.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167909, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 28 22:11:58.346: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Dec 28 22:11:58.396: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:11:58.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9396" for this suite.
STEP: Destroying namespace "webhook-9396-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.096 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":173,"skipped":2652,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:11:58.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 28 22:11:58.796: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:12:16.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7447" for this suite.

• [SLOW TEST:17.971 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2656,"failed":0}
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:12:16.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 28 22:12:16.738: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e88f8ccf-3a68-4008-bb15-a0c18d1072a6" in namespace "downward-api-7973" to be "success or failure"
Dec 28 22:12:16.783: INFO: Pod "downwardapi-volume-e88f8ccf-3a68-4008-bb15-a0c18d1072a6": Phase="Pending", Reason="", readiness=false. Elapsed: 45.156061ms
Dec 28 22:12:18.789: INFO: Pod "downwardapi-volume-e88f8ccf-3a68-4008-bb15-a0c18d1072a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050905483s
Dec 28 22:12:20.821: INFO: Pod "downwardapi-volume-e88f8ccf-3a68-4008-bb15-a0c18d1072a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083111828s
Dec 28 22:12:22.831: INFO: Pod "downwardapi-volume-e88f8ccf-3a68-4008-bb15-a0c18d1072a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092503655s
Dec 28 22:12:24.839: INFO: Pod "downwardapi-volume-e88f8ccf-3a68-4008-bb15-a0c18d1072a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10125258s
STEP: Saw pod success
Dec 28 22:12:24.840: INFO: Pod "downwardapi-volume-e88f8ccf-3a68-4008-bb15-a0c18d1072a6" satisfied condition "success or failure"
Dec 28 22:12:24.844: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e88f8ccf-3a68-4008-bb15-a0c18d1072a6 container client-container: 
STEP: delete the pod
Dec 28 22:12:24.954: INFO: Waiting for pod downwardapi-volume-e88f8ccf-3a68-4008-bb15-a0c18d1072a6 to disappear
Dec 28 22:12:24.959: INFO: Pod downwardapi-volume-e88f8ccf-3a68-4008-bb15-a0c18d1072a6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:12:24.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7973" for this suite.

• [SLOW TEST:8.333 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2656,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:12:24.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:12:36.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-845" for this suite.

• [SLOW TEST:11.375 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":176,"skipped":2669,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:12:36.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Dec 28 22:12:36.526: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:12:46.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4658" for this suite.

• [SLOW TEST:10.540 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":177,"skipped":2670,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:12:46.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 28 22:12:47.614: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Dec 28 22:12:49.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:12:51.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:12:53.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713167967, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 28 22:12:56.705: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:12:56.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3724-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:12:57.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7472" for this suite.
STEP: Destroying namespace "webhook-7472-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.957 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":178,"skipped":2726,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:12:57.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-451d80d0-732a-4f0e-8c79-75f4608a0e2d
STEP: Creating a pod to test consume secrets
Dec 28 22:12:58.004: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e2a40d9-ddc0-4ab6-b953-676dd4ef71ea" in namespace "projected-831" to be "success or failure"
Dec 28 22:12:58.024: INFO: Pod "pod-projected-secrets-5e2a40d9-ddc0-4ab6-b953-676dd4ef71ea": Phase="Pending", Reason="", readiness=false. Elapsed: 19.559479ms
Dec 28 22:13:00.030: INFO: Pod "pod-projected-secrets-5e2a40d9-ddc0-4ab6-b953-676dd4ef71ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02515491s
Dec 28 22:13:02.061: INFO: Pod "pod-projected-secrets-5e2a40d9-ddc0-4ab6-b953-676dd4ef71ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056694329s
Dec 28 22:13:04.083: INFO: Pod "pod-projected-secrets-5e2a40d9-ddc0-4ab6-b953-676dd4ef71ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078924357s
Dec 28 22:13:06.093: INFO: Pod "pod-projected-secrets-5e2a40d9-ddc0-4ab6-b953-676dd4ef71ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088947881s
Dec 28 22:13:08.102: INFO: Pod "pod-projected-secrets-5e2a40d9-ddc0-4ab6-b953-676dd4ef71ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097949852s
STEP: Saw pod success
Dec 28 22:13:08.103: INFO: Pod "pod-projected-secrets-5e2a40d9-ddc0-4ab6-b953-676dd4ef71ea" satisfied condition "success or failure"
Dec 28 22:13:08.106: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-5e2a40d9-ddc0-4ab6-b953-676dd4ef71ea container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 22:13:08.172: INFO: Waiting for pod pod-projected-secrets-5e2a40d9-ddc0-4ab6-b953-676dd4ef71ea to disappear
Dec 28 22:13:08.189: INFO: Pod pod-projected-secrets-5e2a40d9-ddc0-4ab6-b953-676dd4ef71ea no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:13:08.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-831" for this suite.

• [SLOW TEST:10.352 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2729,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:13:08.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 28 22:13:08.489: INFO: Number of nodes with available pods: 0
Dec 28 22:13:08.489: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:09.801: INFO: Number of nodes with available pods: 0
Dec 28 22:13:09.801: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:10.508: INFO: Number of nodes with available pods: 0
Dec 28 22:13:10.508: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:11.526: INFO: Number of nodes with available pods: 0
Dec 28 22:13:11.527: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:12.501: INFO: Number of nodes with available pods: 0
Dec 28 22:13:12.501: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:13.982: INFO: Number of nodes with available pods: 0
Dec 28 22:13:13.982: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:14.831: INFO: Number of nodes with available pods: 0
Dec 28 22:13:14.831: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:15.548: INFO: Number of nodes with available pods: 0
Dec 28 22:13:15.548: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:16.508: INFO: Number of nodes with available pods: 1
Dec 28 22:13:16.508: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 28 22:13:17.513: INFO: Number of nodes with available pods: 2
Dec 28 22:13:17.513: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 28 22:13:17.639: INFO: Number of nodes with available pods: 1
Dec 28 22:13:17.640: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:18.665: INFO: Number of nodes with available pods: 1
Dec 28 22:13:18.665: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:19.743: INFO: Number of nodes with available pods: 1
Dec 28 22:13:19.744: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:20.654: INFO: Number of nodes with available pods: 1
Dec 28 22:13:20.654: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:21.655: INFO: Number of nodes with available pods: 1
Dec 28 22:13:21.655: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:22.659: INFO: Number of nodes with available pods: 1
Dec 28 22:13:22.659: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:23.685: INFO: Number of nodes with available pods: 1
Dec 28 22:13:23.685: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:24.664: INFO: Number of nodes with available pods: 1
Dec 28 22:13:24.664: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:25.655: INFO: Number of nodes with available pods: 1
Dec 28 22:13:25.655: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:26.846: INFO: Number of nodes with available pods: 1
Dec 28 22:13:26.846: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:27.654: INFO: Number of nodes with available pods: 1
Dec 28 22:13:27.654: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:28.654: INFO: Number of nodes with available pods: 1
Dec 28 22:13:28.654: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:29.650: INFO: Number of nodes with available pods: 1
Dec 28 22:13:29.651: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:30.659: INFO: Number of nodes with available pods: 1
Dec 28 22:13:30.660: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:31.652: INFO: Number of nodes with available pods: 1
Dec 28 22:13:31.652: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:32.649: INFO: Number of nodes with available pods: 1
Dec 28 22:13:32.650: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:13:33.653: INFO: Number of nodes with available pods: 2
Dec 28 22:13:33.653: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1683, will wait for the garbage collector to delete the pods
Dec 28 22:13:33.728: INFO: Deleting DaemonSet.extensions daemon-set took: 15.839647ms
Dec 28 22:13:34.029: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.175103ms
Dec 28 22:13:46.835: INFO: Number of nodes with available pods: 0
Dec 28 22:13:46.835: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 22:13:46.837: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1683/daemonsets","resourceVersion":"10437380"},"items":null}

Dec 28 22:13:46.839: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1683/pods","resourceVersion":"10437380"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:13:46.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1683" for this suite.

• [SLOW TEST:38.662 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":180,"skipped":2781,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:13:46.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:13:46.972: INFO: Creating deployment "test-recreate-deployment"
Dec 28 22:13:46.987: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 28 22:13:46.999: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 28 22:13:49.013: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 28 22:13:49.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:13:51.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:13:53.024: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168027, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:13:55.029: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 28 22:13:55.045: INFO: Updating deployment test-recreate-deployment
Dec 28 22:13:55.045: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Dec 28 22:13:55.451: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-9247 /apis/apps/v1/namespaces/deployment-9247/deployments/test-recreate-deployment 7d4fc465-ed1d-463a-bb65-0a7965990c72 10437458 2 2019-12-28 22:13:46 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027ac438  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2019-12-28 22:13:55 +0000 UTC,LastTransitionTime:2019-12-28 22:13:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2019-12-28 22:13:55 +0000 UTC,LastTransitionTime:2019-12-28 22:13:47 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Dec 28 22:13:55.456: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-9247 /apis/apps/v1/namespaces/deployment-9247/replicasets/test-recreate-deployment-5f94c574ff c5375ac3-3602-4a44-acde-15a0b843008c 10437456 1 2019-12-28 22:13:55 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 7d4fc465-ed1d-463a-bb65-0a7965990c72 0xc0027ac8f7 0xc0027ac8f8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027ac958  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 28 22:13:55.456: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 28 22:13:55.456: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-9247 /apis/apps/v1/namespaces/deployment-9247/replicasets/test-recreate-deployment-799c574856 dc2b4782-d01f-4fe8-bdfa-687b61905a2f 10437446 2 2019-12-28 22:13:46 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 7d4fc465-ed1d-463a-bb65-0a7965990c72 0xc0027ac9c7 0xc0027ac9c8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027aca38  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 28 22:13:55.464: INFO: Pod "test-recreate-deployment-5f94c574ff-r7762" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-r7762 test-recreate-deployment-5f94c574ff- deployment-9247 /api/v1/namespaces/deployment-9247/pods/test-recreate-deployment-5f94c574ff-r7762 e663cdf7-6b3d-48e2-b7d2-6d1cc9ff7540 10437459 0 2019-12-28 22:13:55 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff c5375ac3-3602-4a44-acde-15a0b843008c 0xc0027ace77 0xc0027ace78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kmhg4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kmhg4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kmhg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:13:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:13:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:13:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-28 22:13:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:13:55.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9247" for this suite.

• [SLOW TEST:8.629 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":181,"skipped":2817,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:13:55.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5169.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5169.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5169.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5169.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5169.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5169.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 22:14:09.762: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5169/dns-test-729c7d0c-347e-4018-900f-578280a20207: the server could not find the requested resource (get pods dns-test-729c7d0c-347e-4018-900f-578280a20207)
Dec 28 22:14:09.769: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5169/dns-test-729c7d0c-347e-4018-900f-578280a20207: the server could not find the requested resource (get pods dns-test-729c7d0c-347e-4018-900f-578280a20207)
Dec 28 22:14:09.775: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-5169.svc.cluster.local from pod dns-5169/dns-test-729c7d0c-347e-4018-900f-578280a20207: the server could not find the requested resource (get pods dns-test-729c7d0c-347e-4018-900f-578280a20207)
Dec 28 22:14:09.781: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-5169/dns-test-729c7d0c-347e-4018-900f-578280a20207: the server could not find the requested resource (get pods dns-test-729c7d0c-347e-4018-900f-578280a20207)
Dec 28 22:14:09.800: INFO: Unable to read jessie_udp@PodARecord from pod dns-5169/dns-test-729c7d0c-347e-4018-900f-578280a20207: the server could not find the requested resource (get pods dns-test-729c7d0c-347e-4018-900f-578280a20207)
Dec 28 22:14:09.813: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5169/dns-test-729c7d0c-347e-4018-900f-578280a20207: the server could not find the requested resource (get pods dns-test-729c7d0c-347e-4018-900f-578280a20207)
Dec 28 22:14:09.813: INFO: Lookups using dns-5169/dns-test-729c7d0c-347e-4018-900f-578280a20207 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-5169.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 28 22:14:14.856: INFO: DNS probes using dns-5169/dns-test-729c7d0c-347e-4018-900f-578280a20207 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:14:14.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5169" for this suite.

• [SLOW TEST:19.548 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":182,"skipped":2827,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:14:15.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-87e625c1-e86d-489a-ba8f-e5ae450feb37
STEP: Creating a pod to test consume configMaps
Dec 28 22:14:15.281: INFO: Waiting up to 5m0s for pod "pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8" in namespace "configmap-5893" to be "success or failure"
Dec 28 22:14:15.319: INFO: Pod "pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8": Phase="Pending", Reason="", readiness=false. Elapsed: 38.182314ms
Dec 28 22:14:17.329: INFO: Pod "pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047834635s
Dec 28 22:14:19.334: INFO: Pod "pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053420403s
Dec 28 22:14:21.360: INFO: Pod "pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079382811s
Dec 28 22:14:23.526: INFO: Pod "pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.244579965s
Dec 28 22:14:25.536: INFO: Pod "pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.255093391s
Dec 28 22:14:27.547: INFO: Pod "pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.265861369s
Dec 28 22:14:29.573: INFO: Pod "pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.291965829s
STEP: Saw pod success
Dec 28 22:14:29.573: INFO: Pod "pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8" satisfied condition "success or failure"
Dec 28 22:14:29.593: INFO: Trying to get logs from node jerma-node pod pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8 container configmap-volume-test: 
STEP: delete the pod
Dec 28 22:14:29.721: INFO: Waiting for pod pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8 to disappear
Dec 28 22:14:29.727: INFO: Pod pod-configmaps-31777868-8c0c-4cf2-a5c2-42c1b5c343d8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:14:29.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5893" for this suite.

• [SLOW TEST:14.741 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2859,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:14:29.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 28 22:14:30.514: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 28 22:14:32.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:14:34.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:14:36.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168070, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 28 22:14:39.834: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Dec 28 22:14:47.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3209 to-be-attached-pod -i -c=container1'
Dec 28 22:14:48.094: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:14:48.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3209" for this suite.
STEP: Destroying namespace "webhook-3209-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.427 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":184,"skipped":2876,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:14:48.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-464
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-464 to expose endpoints map[]
Dec 28 22:14:48.658: INFO: Get endpoints failed (8.556187ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 28 22:14:49.667: INFO: successfully validated that service multi-endpoint-test in namespace services-464 exposes endpoints map[] (1.017522448s elapsed)
STEP: Creating pod pod1 in namespace services-464
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-464 to expose endpoints map[pod1:[100]]
Dec 28 22:14:53.830: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.147878606s elapsed, will retry)
Dec 28 22:14:58.991: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.308465914s elapsed, will retry)
Dec 28 22:15:00.013: INFO: successfully validated that service multi-endpoint-test in namespace services-464 exposes endpoints map[pod1:[100]] (10.3301821s elapsed)
STEP: Creating pod pod2 in namespace services-464
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-464 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 28 22:15:04.324: INFO: Unexpected endpoints: found map[23ef7aa8-4fbf-49fb-b9ed-23f16a3a971c:[100]], expected map[pod1:[100] pod2:[101]] (4.300605363s elapsed, will retry)
Dec 28 22:15:07.388: INFO: successfully validated that service multi-endpoint-test in namespace services-464 exposes endpoints map[pod1:[100] pod2:[101]] (7.365270264s elapsed)
STEP: Deleting pod pod1 in namespace services-464
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-464 to expose endpoints map[pod2:[101]]
Dec 28 22:15:07.454: INFO: successfully validated that service multi-endpoint-test in namespace services-464 exposes endpoints map[pod2:[101]] (48.837541ms elapsed)
STEP: Deleting pod pod2 in namespace services-464
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-464 to expose endpoints map[]
Dec 28 22:15:07.536: INFO: successfully validated that service multi-endpoint-test in namespace services-464 exposes endpoints map[] (67.558338ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:15:07.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-464" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:19.374 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":185,"skipped":2879,"failed":0}
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:15:07.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 28 22:15:17.767: INFO: &Pod{ObjectMeta:{send-events-9932ac2c-9296-472f-8723-85330905ceb4  events-9897 /api/v1/namespaces/events-9897/pods/send-events-9932ac2c-9296-472f-8723-85330905ceb4 1b773280-e928-4200-a555-79dedf2a2a2a 10437801 0 2019-12-28 22:15:07 +0000 UTC   map[name:foo time:729675632] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2qj5r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2qj5r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2qj5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:15:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:15:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.1,StartTime:2019-12-28 22:15:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-28 22:15:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://01ec357e7fc10d50b4bc3842a599513a1ad9b2cb220ee073d2854f75bccb4c48,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Dec 28 22:15:19.781: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 28 22:15:21.791: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:15:21.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9897" for this suite.

• [SLOW TEST:14.254 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":186,"skipped":2880,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:15:21.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:15:22.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5309" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":2890,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:15:22.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:15:22.358: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 28 22:15:27.367: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 28 22:15:29.376: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 28 22:15:31.388: INFO: Creating deployment "test-rollover-deployment"
Dec 28 22:15:31.407: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 28 22:15:33.427: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 28 22:15:33.441: INFO: Ensure that both replica sets have 1 created replica
Dec 28 22:15:33.451: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 28 22:15:33.465: INFO: Updating deployment test-rollover-deployment
Dec 28 22:15:33.465: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 28 22:15:35.500: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 28 22:15:35.510: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 28 22:15:35.528: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 22:15:35.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168133, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:15:37.542: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 22:15:37.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168133, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:15:39.542: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 22:15:39.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168133, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:15:41.553: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 22:15:41.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168140, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:15:43.588: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 22:15:43.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168140, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:15:45.543: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 22:15:45.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168140, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:15:47.551: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 22:15:47.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168140, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:15:49.543: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 22:15:49.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168140, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168131, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:15:51.547: INFO: 
Dec 28 22:15:51.547: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Dec 28 22:15:51.556: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-837 /apis/apps/v1/namespaces/deployment-837/deployments/test-rollover-deployment 560b4ac5-b705-4908-ac0d-649bc7510c7c 10437950 2 2019-12-28 22:15:31 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005138778  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2019-12-28 22:15:31 +0000 UTC,LastTransitionTime:2019-12-28 22:15:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2019-12-28 22:15:50 +0000 UTC,LastTransitionTime:2019-12-28 22:15:31 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Dec 28 22:15:51.559: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-837 /apis/apps/v1/namespaces/deployment-837/replicasets/test-rollover-deployment-574d6dfbff aec7f180-b2d9-411a-ae50-faed004970f0 10437939 2 2019-12-28 22:15:33 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 560b4ac5-b705-4908-ac0d-649bc7510c7c 0xc0052cbe77 0xc0052cbe78}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052cbee8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Dec 28 22:15:51.559: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 28 22:15:51.559: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-837 /apis/apps/v1/namespaces/deployment-837/replicasets/test-rollover-controller 90566f97-e4c8-4a1c-8715-b02e002aa174 10437949 2 2019-12-28 22:15:22 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 560b4ac5-b705-4908-ac0d-649bc7510c7c 0xc0052cbd8f 0xc0052cbda0}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0052cbe08  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 28 22:15:51.560: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-837 /apis/apps/v1/namespaces/deployment-837/replicasets/test-rollover-deployment-f6c94f66c 2c4996ba-f77a-4e40-a1c2-54ed5d415f1c 10437902 2 2019-12-28 22:15:31 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 560b4ac5-b705-4908-ac0d-649bc7510c7c 0xc0052cbf50 0xc0052cbf51}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052cbfc8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 28 22:15:51.563: INFO: Pod "test-rollover-deployment-574d6dfbff-fxcqt" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-fxcqt test-rollover-deployment-574d6dfbff- deployment-837 /api/v1/namespaces/deployment-837/pods/test-rollover-deployment-574d6dfbff-fxcqt 9ea3e878-b3e9-4817-9cf1-e0a81fc709fa 10437921 0 2019-12-28 22:15:33 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff aec7f180-b2d9-411a-ae50-faed004970f0 0xc004e85d97 0xc004e85d98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mm2fd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mm2fd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mm2fd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:15:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:15:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:15:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.3,StartTime:2019-12-28 22:15:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-28 22:15:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://e1cfdf279e29c7bc7945b9a8757a3aea270d15b37d57f5a2d9aa3e470d373388,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:15:51.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-837" for this suite.

• [SLOW TEST:29.415 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":188,"skipped":2909,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:15:51.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 28 22:15:51.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-7885'
Dec 28 22:15:51.942: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 22:15:51.942: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718
Dec 28 22:15:54.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7885'
Dec 28 22:15:54.407: INFO: stderr: ""
Dec 28 22:15:54.407: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:15:54.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7885" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":189,"skipped":2920,"failed":0}
S
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:15:54.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-7639
STEP: creating replication controller nodeport-test in namespace services-7639
I1228 22:15:55.049752       8 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-7639, replica count: 2
I1228 22:15:58.100855       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:16:01.101680       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:16:04.102377       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:16:07.102992       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 28 22:16:07.103: INFO: Creating new exec pod
Dec 28 22:16:14.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7639 execpodrjwxw -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Dec 28 22:16:14.591: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Dec 28 22:16:14.591: INFO: stdout: ""
Dec 28 22:16:14.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7639 execpodrjwxw -- /bin/sh -x -c nc -zv -t -w 2 10.96.15.67 80'
Dec 28 22:16:15.006: INFO: stderr: "+ nc -zv -t -w 2 10.96.15.67 80\nConnection to 10.96.15.67 80 port [tcp/http] succeeded!\n"
Dec 28 22:16:15.006: INFO: stdout: ""
Dec 28 22:16:15.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7639 execpodrjwxw -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.170 30945'
Dec 28 22:16:15.321: INFO: stderr: "+ nc -zv -t -w 2 10.96.2.170 30945\nConnection to 10.96.2.170 30945 port [tcp/30945] succeeded!\n"
Dec 28 22:16:15.321: INFO: stdout: ""
Dec 28 22:16:15.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7639 execpodrjwxw -- /bin/sh -x -c nc -zv -t -w 2 10.96.3.35 30945'
Dec 28 22:16:15.735: INFO: stderr: "+ nc -zv -t -w 2 10.96.3.35 30945\nConnection to 10.96.3.35 30945 port [tcp/30945] succeeded!\n"
Dec 28 22:16:15.735: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:16:15.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7639" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:21.350 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":190,"skipped":2921,"failed":0}
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:16:15.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 28 22:16:15.857: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e6cd01b-ef2d-4d80-ad42-5f1e5dab628c" in namespace "downward-api-4083" to be "success or failure"
Dec 28 22:16:15.985: INFO: Pod "downwardapi-volume-6e6cd01b-ef2d-4d80-ad42-5f1e5dab628c": Phase="Pending", Reason="", readiness=false. Elapsed: 126.826197ms
Dec 28 22:16:17.996: INFO: Pod "downwardapi-volume-6e6cd01b-ef2d-4d80-ad42-5f1e5dab628c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138024794s
Dec 28 22:16:20.003: INFO: Pod "downwardapi-volume-6e6cd01b-ef2d-4d80-ad42-5f1e5dab628c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145553805s
Dec 28 22:16:22.040: INFO: Pod "downwardapi-volume-6e6cd01b-ef2d-4d80-ad42-5f1e5dab628c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182268075s
Dec 28 22:16:25.081: INFO: Pod "downwardapi-volume-6e6cd01b-ef2d-4d80-ad42-5f1e5dab628c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.223722546s
STEP: Saw pod success
Dec 28 22:16:25.082: INFO: Pod "downwardapi-volume-6e6cd01b-ef2d-4d80-ad42-5f1e5dab628c" satisfied condition "success or failure"
Dec 28 22:16:25.093: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6e6cd01b-ef2d-4d80-ad42-5f1e5dab628c container client-container: 
STEP: delete the pod
Dec 28 22:16:25.399: INFO: Waiting for pod downwardapi-volume-6e6cd01b-ef2d-4d80-ad42-5f1e5dab628c to disappear
Dec 28 22:16:25.409: INFO: Pod downwardapi-volume-6e6cd01b-ef2d-4d80-ad42-5f1e5dab628c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:16:25.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4083" for this suite.

• [SLOW TEST:9.671 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":2921,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:16:25.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Dec 28 22:16:36.273: INFO: Successfully updated pod "annotationupdatef6746dbe-2916-41d4-b1df-051c820e0f11"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:16:38.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6710" for this suite.

• [SLOW TEST:12.895 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":2935,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:16:38.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-72f9fa52-d8db-4f3c-ac6e-c19dd449ca42 in namespace container-probe-3725
Dec 28 22:16:46.510: INFO: Started pod test-webserver-72f9fa52-d8db-4f3c-ac6e-c19dd449ca42 in namespace container-probe-3725
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 22:16:46.515: INFO: Initial restart count of pod test-webserver-72f9fa52-d8db-4f3c-ac6e-c19dd449ca42 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:20:48.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3725" for this suite.

• [SLOW TEST:250.340 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":2947,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:20:48.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:21:02.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-753" for this suite.

• [SLOW TEST:13.393 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":194,"skipped":2949,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:21:02.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Dec 28 22:21:02.673: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Dec 28 22:21:04.693: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:21:06.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:21:08.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168462, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 28 22:21:11.753: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:21:11.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:21:12.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-736" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.035 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":195,"skipped":2955,"failed":0}
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:21:13.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:21:13.203: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ff407b3e-09fe-45d3-ae90-4f7b7980d3a5" in namespace "security-context-test-5378" to be "success or failure"
Dec 28 22:21:13.209: INFO: Pod "busybox-readonly-false-ff407b3e-09fe-45d3-ae90-4f7b7980d3a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070064ms
Dec 28 22:21:15.220: INFO: Pod "busybox-readonly-false-ff407b3e-09fe-45d3-ae90-4f7b7980d3a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016899852s
Dec 28 22:21:17.628: INFO: Pod "busybox-readonly-false-ff407b3e-09fe-45d3-ae90-4f7b7980d3a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425614544s
Dec 28 22:21:19.641: INFO: Pod "busybox-readonly-false-ff407b3e-09fe-45d3-ae90-4f7b7980d3a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437812211s
Dec 28 22:21:21.663: INFO: Pod "busybox-readonly-false-ff407b3e-09fe-45d3-ae90-4f7b7980d3a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.459734843s
Dec 28 22:21:21.663: INFO: Pod "busybox-readonly-false-ff407b3e-09fe-45d3-ae90-4f7b7980d3a5" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:21:21.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5378" for this suite.

• [SLOW TEST:8.590 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":2955,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:21:21.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Dec 28 22:21:21.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 28 22:21:24.141: INFO: stderr: ""
Dec 28 22:21:24.141: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.186:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.186:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:21:24.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7995" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":197,"skipped":2979,"failed":0}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:21:24.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:21:24.292: INFO: Create a RollingUpdate DaemonSet
Dec 28 22:21:24.302: INFO: Check that daemon pods launch on every node of the cluster
Dec 28 22:21:24.316: INFO: Number of nodes with available pods: 0
Dec 28 22:21:24.316: INFO: Node jerma-node does not yet have an available daemon pod
Dec 28 22:21:25.497: INFO: Number of nodes with available pods: 0
Dec 28 22:21:25.497: INFO: Node jerma-node does not yet have an available daemon pod
Dec 28 22:21:26.477: INFO: Number of nodes with available pods: 0
Dec 28 22:21:26.477: INFO: Node jerma-node does not yet have an available daemon pod
Dec 28 22:21:27.338: INFO: Number of nodes with available pods: 0
Dec 28 22:21:27.338: INFO: Node jerma-node does not yet have an available daemon pod
Dec 28 22:21:28.332: INFO: Number of nodes with available pods: 0
Dec 28 22:21:28.332: INFO: Node jerma-node does not yet have an available daemon pod
Dec 28 22:21:29.577: INFO: Number of nodes with available pods: 0
Dec 28 22:21:29.577: INFO: Node jerma-node does not yet have an available daemon pod
Dec 28 22:21:30.719: INFO: Number of nodes with available pods: 0
Dec 28 22:21:30.719: INFO: Node jerma-node does not yet have an available daemon pod
Dec 28 22:21:31.373: INFO: Number of nodes with available pods: 0
Dec 28 22:21:31.373: INFO: Node jerma-node does not yet have an available daemon pod
Dec 28 22:21:32.336: INFO: Number of nodes with available pods: 2
Dec 28 22:21:32.337: INFO: Number of running nodes: 2, number of available pods: 2
Dec 28 22:21:32.337: INFO: Update the DaemonSet to trigger a rollout
Dec 28 22:21:32.422: INFO: Updating DaemonSet daemon-set
Dec 28 22:21:47.486: INFO: Roll back the DaemonSet before rollout is complete
Dec 28 22:21:47.505: INFO: Updating DaemonSet daemon-set
Dec 28 22:21:47.506: INFO: Make sure DaemonSet rollback is complete
Dec 28 22:21:47.524: INFO: Wrong image for pod: daemon-set-6gskg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 28 22:21:47.524: INFO: Pod daemon-set-6gskg is not available
Dec 28 22:21:48.550: INFO: Wrong image for pod: daemon-set-6gskg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 28 22:21:48.550: INFO: Pod daemon-set-6gskg is not available
Dec 28 22:21:49.538: INFO: Wrong image for pod: daemon-set-6gskg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 28 22:21:49.538: INFO: Pod daemon-set-6gskg is not available
Dec 28 22:21:50.542: INFO: Wrong image for pod: daemon-set-6gskg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 28 22:21:50.543: INFO: Pod daemon-set-6gskg is not available
Dec 28 22:21:51.538: INFO: Wrong image for pod: daemon-set-6gskg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 28 22:21:51.538: INFO: Pod daemon-set-6gskg is not available
Dec 28 22:21:52.546: INFO: Wrong image for pod: daemon-set-6gskg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 28 22:21:52.546: INFO: Pod daemon-set-6gskg is not available
Dec 28 22:21:53.538: INFO: Pod daemon-set-l4xjx is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6710, will wait for the garbage collector to delete the pods
Dec 28 22:21:53.623: INFO: Deleting DaemonSet.extensions daemon-set took: 18.302629ms
Dec 28 22:21:53.925: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.359767ms
Dec 28 22:21:59.535: INFO: Number of nodes with available pods: 0
Dec 28 22:21:59.535: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 22:21:59.540: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6710/daemonsets","resourceVersion":"10438867"},"items":null}

Dec 28 22:21:59.543: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6710/pods","resourceVersion":"10438867"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:21:59.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6710" for this suite.

• [SLOW TEST:35.417 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":198,"skipped":2984,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:21:59.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 28 22:22:00.066: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 28 22:22:02.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:22:04.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:22:06.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713168520, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 28 22:22:09.149: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:22:09.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:22:11.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2161" for this suite.
STEP: Destroying namespace "webhook-2161-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.829 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":199,"skipped":3006,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:22:11.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-966304df-53c5-498d-85b7-2fc1f5a743e1
STEP: Creating a pod to test consume secrets
Dec 28 22:22:11.668: INFO: Waiting up to 5m0s for pod "pod-secrets-7f81f164-8731-4ad4-b784-80bf6dd841fd" in namespace "secrets-8535" to be "success or failure"
Dec 28 22:22:11.702: INFO: Pod "pod-secrets-7f81f164-8731-4ad4-b784-80bf6dd841fd": Phase="Pending", Reason="", readiness=false. Elapsed: 33.876385ms
Dec 28 22:22:13.729: INFO: Pod "pod-secrets-7f81f164-8731-4ad4-b784-80bf6dd841fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06086926s
Dec 28 22:22:15.736: INFO: Pod "pod-secrets-7f81f164-8731-4ad4-b784-80bf6dd841fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06783836s
Dec 28 22:22:17.745: INFO: Pod "pod-secrets-7f81f164-8731-4ad4-b784-80bf6dd841fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076960545s
Dec 28 22:22:19.754: INFO: Pod "pod-secrets-7f81f164-8731-4ad4-b784-80bf6dd841fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086186708s
Dec 28 22:22:21.775: INFO: Pod "pod-secrets-7f81f164-8731-4ad4-b784-80bf6dd841fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107099275s
STEP: Saw pod success
Dec 28 22:22:21.775: INFO: Pod "pod-secrets-7f81f164-8731-4ad4-b784-80bf6dd841fd" satisfied condition "success or failure"
Dec 28 22:22:21.785: INFO: Trying to get logs from node jerma-node pod pod-secrets-7f81f164-8731-4ad4-b784-80bf6dd841fd container secret-volume-test: 
STEP: delete the pod
Dec 28 22:22:21.899: INFO: Waiting for pod pod-secrets-7f81f164-8731-4ad4-b784-80bf6dd841fd to disappear
Dec 28 22:22:21.912: INFO: Pod pod-secrets-7f81f164-8731-4ad4-b784-80bf6dd841fd no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:22:21.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8535" for this suite.

• [SLOW TEST:10.519 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3054,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:22:21.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1837
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1837
STEP: Creating statefulset with conflicting port in namespace statefulset-1837
STEP: Waiting until pod test-pod starts running in namespace statefulset-1837
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1837
Dec 28 22:22:32.193: INFO: Observed stateful pod in namespace: statefulset-1837, name: ss-0, uid: 501b5837-f511-41a7-bb57-06154b60822d, status phase: Pending. Waiting for statefulset controller to delete.
Dec 28 22:22:36.764: INFO: Observed stateful pod in namespace: statefulset-1837, name: ss-0, uid: 501b5837-f511-41a7-bb57-06154b60822d, status phase: Failed. Waiting for statefulset controller to delete.
Dec 28 22:22:36.776: INFO: Observed stateful pod in namespace: statefulset-1837, name: ss-0, uid: 501b5837-f511-41a7-bb57-06154b60822d, status phase: Failed. Waiting for statefulset controller to delete.
Dec 28 22:22:36.789: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1837
STEP: Removing pod with conflicting port in namespace statefulset-1837
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1837 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Dec 28 22:22:47.074: INFO: Deleting all statefulset in ns statefulset-1837
Dec 28 22:22:47.083: INFO: Scaling statefulset ss to 0
Dec 28 22:22:57.159: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 22:22:57.164: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:22:57.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1837" for this suite.

• [SLOW TEST:35.306 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":201,"skipped":3065,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:22:57.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:23:05.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4779" for this suite.

• [SLOW TEST:8.235 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3069,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:23:05.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-797a4134-9007-45e1-973f-ea7717033cfa in namespace container-probe-3376
Dec 28 22:23:11.708: INFO: Started pod busybox-797a4134-9007-45e1-973f-ea7717033cfa in namespace container-probe-3376
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 22:23:11.717: INFO: Initial restart count of pod busybox-797a4134-9007-45e1-973f-ea7717033cfa is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:27:13.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3376" for this suite.

• [SLOW TEST:247.768 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3076,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:27:13.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-68hk
STEP: Creating a pod to test atomic-volume-subpath
Dec 28 22:27:13.423: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-68hk" in namespace "subpath-8211" to be "success or failure"
Dec 28 22:27:13.441: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Pending", Reason="", readiness=false. Elapsed: 18.542788ms
Dec 28 22:27:15.456: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03273731s
Dec 28 22:27:17.464: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040631715s
Dec 28 22:27:19.472: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049552071s
Dec 28 22:27:21.479: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056006937s
Dec 28 22:27:23.488: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Running", Reason="", readiness=true. Elapsed: 10.065485644s
Dec 28 22:27:25.497: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Running", Reason="", readiness=true. Elapsed: 12.073885254s
Dec 28 22:27:27.507: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Running", Reason="", readiness=true. Elapsed: 14.083890983s
Dec 28 22:27:29.516: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Running", Reason="", readiness=true. Elapsed: 16.093101782s
Dec 28 22:27:31.524: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Running", Reason="", readiness=true. Elapsed: 18.101377078s
Dec 28 22:27:33.532: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Running", Reason="", readiness=true. Elapsed: 20.108989003s
Dec 28 22:27:35.545: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Running", Reason="", readiness=true. Elapsed: 22.121800069s
Dec 28 22:27:37.551: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Running", Reason="", readiness=true. Elapsed: 24.128353959s
Dec 28 22:27:39.559: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Running", Reason="", readiness=true. Elapsed: 26.136153847s
Dec 28 22:27:41.570: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Running", Reason="", readiness=true. Elapsed: 28.146779654s
Dec 28 22:27:43.581: INFO: Pod "pod-subpath-test-secret-68hk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.157682785s
STEP: Saw pod success
Dec 28 22:27:43.581: INFO: Pod "pod-subpath-test-secret-68hk" satisfied condition "success or failure"
Dec 28 22:27:43.587: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-68hk container test-container-subpath-secret-68hk: 
STEP: delete the pod
Dec 28 22:27:43.718: INFO: Waiting for pod pod-subpath-test-secret-68hk to disappear
Dec 28 22:27:43.723: INFO: Pod pod-subpath-test-secret-68hk no longer exists
STEP: Deleting pod pod-subpath-test-secret-68hk
Dec 28 22:27:43.723: INFO: Deleting pod "pod-subpath-test-secret-68hk" in namespace "subpath-8211"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:27:43.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8211" for this suite.

• [SLOW TEST:30.505 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":204,"skipped":3116,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:27:43.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod
Dec 28 22:27:43.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-2235 -- logs-generator --log-lines-total 100 --run-duration 20s'
Dec 28 22:27:44.093: INFO: stderr: ""
Dec 28 22:27:44.093: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Dec 28 22:27:44.093: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Dec 28 22:27:44.094: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2235" to be "running and ready, or succeeded"
Dec 28 22:27:44.132: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 37.554902ms
Dec 28 22:27:46.142: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048455305s
Dec 28 22:27:48.151: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057207393s
Dec 28 22:27:50.160: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066273275s
Dec 28 22:27:52.171: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.076915905s
Dec 28 22:27:52.171: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Dec 28 22:27:52.171: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Dec 28 22:27:52.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2235'
Dec 28 22:27:52.356: INFO: stderr: ""
Dec 28 22:27:52.356: INFO: stdout: "I1228 22:27:50.844509       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/6ww 502\nI1228 22:27:51.044837       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/7tr 419\nI1228 22:27:51.244786       1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/w7dr 240\nI1228 22:27:51.444980       1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/qkvq 450\nI1228 22:27:51.644805       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/kh6 222\nI1228 22:27:51.844829       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/ngb5 444\nI1228 22:27:52.044702       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/v9ll 523\nI1228 22:27:52.244836       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/p4bk 347\n"
STEP: limiting log lines
Dec 28 22:27:52.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2235 --tail=1'
Dec 28 22:27:52.515: INFO: stderr: ""
Dec 28 22:27:52.515: INFO: stdout: "I1228 22:27:52.444797       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/c978 377\n"
Dec 28 22:27:52.515: INFO: got output "I1228 22:27:52.444797       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/c978 377\n"
STEP: limiting log bytes
Dec 28 22:27:52.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2235 --limit-bytes=1'
Dec 28 22:27:52.669: INFO: stderr: ""
Dec 28 22:27:52.669: INFO: stdout: "I"
Dec 28 22:27:52.670: INFO: got output "I"
STEP: exposing timestamps
Dec 28 22:27:52.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2235 --tail=1 --timestamps'
Dec 28 22:27:52.828: INFO: stderr: ""
Dec 28 22:27:52.828: INFO: stdout: "2019-12-28T22:27:52.645205723Z I1228 22:27:52.644704       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/8mzs 243\n"
Dec 28 22:27:52.829: INFO: got output "2019-12-28T22:27:52.645205723Z I1228 22:27:52.644704       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/8mzs 243\n"
STEP: restricting to a time range
Dec 28 22:27:55.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2235 --since=1s'
Dec 28 22:27:55.519: INFO: stderr: ""
Dec 28 22:27:55.519: INFO: stdout: "I1228 22:27:54.644742       1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/m657 473\nI1228 22:27:54.845004       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/7hxn 261\nI1228 22:27:55.044775       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/rqr 413\nI1228 22:27:55.244779       1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/hd2s 285\nI1228 22:27:55.444820       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/92d 480\n"
Dec 28 22:27:55.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2235 --since=24h'
Dec 28 22:27:55.663: INFO: stderr: ""
Dec 28 22:27:55.663: INFO: stdout: "I1228 22:27:50.844509       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/6ww 502\nI1228 22:27:51.044837       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/7tr 419\nI1228 22:27:51.244786       1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/w7dr 240\nI1228 22:27:51.444980       1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/qkvq 450\nI1228 22:27:51.644805       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/kh6 222\nI1228 22:27:51.844829       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/ngb5 444\nI1228 22:27:52.044702       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/v9ll 523\nI1228 22:27:52.244836       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/p4bk 347\nI1228 22:27:52.444797       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/c978 377\nI1228 22:27:52.644704       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/8mzs 243\nI1228 22:27:52.844773       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/7bgn 481\nI1228 22:27:53.044699       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/8pk 517\nI1228 22:27:53.244675       1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/7kdr 340\nI1228 22:27:53.444780       1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/ngl 254\nI1228 22:27:53.644828       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/lmc8 503\nI1228 22:27:53.844850       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/qzj 276\nI1228 22:27:54.044930       1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/pnx 243\nI1228 22:27:54.244804       1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/wwxk 563\nI1228 22:27:54.444854       1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/728 360\nI1228 22:27:54.644742       1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/m657 473\nI1228 22:27:54.845004       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/7hxn 261\nI1228 22:27:55.044775       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/rqr 413\nI1228 22:27:55.244779       1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/hd2s 285\nI1228 22:27:55.444820       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/92d 480\nI1228 22:27:55.644800       1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/qr6f 277\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Dec 28 22:27:55.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2235'
Dec 28 22:28:06.693: INFO: stderr: ""
Dec 28 22:28:06.693: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:28:06.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2235" for this suite.

• [SLOW TEST:22.961 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":205,"skipped":3135,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:28:06.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 28 22:28:06.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5508'
Dec 28 22:28:07.070: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 22:28:07.071: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Dec 28 22:28:07.099: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 28 22:28:07.146: INFO: scanned /root for discovery docs: 
Dec 28 22:28:07.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5508'
Dec 28 22:28:28.360: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 28 22:28:28.361: INFO: stdout: "Created e2e-test-httpd-rc-7b05fb05f1ed015cdd3b4da2a9d585c6\nScaling up e2e-test-httpd-rc-7b05fb05f1ed015cdd3b4da2a9d585c6 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-7b05fb05f1ed015cdd3b4da2a9d585c6 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-7b05fb05f1ed015cdd3b4da2a9d585c6 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Dec 28 22:28:28.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5508'
Dec 28 22:28:28.523: INFO: stderr: ""
Dec 28 22:28:28.524: INFO: stdout: "e2e-test-httpd-rc-7b05fb05f1ed015cdd3b4da2a9d585c6-h5g5q "
Dec 28 22:28:28.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-7b05fb05f1ed015cdd3b4da2a9d585c6-h5g5q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5508'
Dec 28 22:28:28.642: INFO: stderr: ""
Dec 28 22:28:28.642: INFO: stdout: "true"
Dec 28 22:28:28.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-7b05fb05f1ed015cdd3b4da2a9d585c6-h5g5q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5508'
Dec 28 22:28:28.724: INFO: stderr: ""
Dec 28 22:28:28.724: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Dec 28 22:28:28.724: INFO: e2e-test-httpd-rc-7b05fb05f1ed015cdd3b4da2a9d585c6-h5g5q is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Dec 28 22:28:28.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5508'
Dec 28 22:28:28.867: INFO: stderr: ""
Dec 28 22:28:28.867: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:28:28.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5508" for this suite.

• [SLOW TEST:22.190 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":206,"skipped":3146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:28:28.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8836.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8836.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8836.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8836.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8836.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8836.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 22:28:41.450: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8836/dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46: the server could not find the requested resource (get pods dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46)
Dec 28 22:28:41.454: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8836/dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46: the server could not find the requested resource (get pods dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46)
Dec 28 22:28:41.459: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8836.svc.cluster.local from pod dns-8836/dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46: the server could not find the requested resource (get pods dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46)
Dec 28 22:28:41.469: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8836/dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46: the server could not find the requested resource (get pods dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46)
Dec 28 22:28:41.474: INFO: Unable to read jessie_udp@PodARecord from pod dns-8836/dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46: the server could not find the requested resource (get pods dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46)
Dec 28 22:28:41.479: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8836/dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46: the server could not find the requested resource (get pods dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46)
Dec 28 22:28:41.479: INFO: Lookups using dns-8836/dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-8836.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 28 22:28:46.543: INFO: DNS probes using dns-8836/dns-test-e8c3c6be-e228-4c17-9dec-ba210c1edd46 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:28:46.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8836" for this suite.

• [SLOW TEST:17.876 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":207,"skipped":3174,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:28:46.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:28:58.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2824" for this suite.

• [SLOW TEST:12.150 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3188,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:28:58.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1228 22:28:59.837056       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 22:28:59.837: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:28:59.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9338" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":209,"skipped":3190,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:28:59.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:29:00.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-286" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":210,"skipped":3198,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:29:00.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Dec 28 22:29:00.626: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Dec 28 22:29:00.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9651'
Dec 28 22:29:01.116: INFO: stderr: ""
Dec 28 22:29:01.116: INFO: stdout: "service/agnhost-slave created\n"
Dec 28 22:29:01.117: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Dec 28 22:29:01.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9651'
Dec 28 22:29:01.775: INFO: stderr: ""
Dec 28 22:29:01.775: INFO: stdout: "service/agnhost-master created\n"
Dec 28 22:29:01.776: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 28 22:29:01.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9651'
Dec 28 22:29:02.483: INFO: stderr: ""
Dec 28 22:29:02.484: INFO: stdout: "service/frontend created\n"
Dec 28 22:29:02.486: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Dec 28 22:29:02.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9651'
Dec 28 22:29:03.046: INFO: stderr: ""
Dec 28 22:29:03.046: INFO: stdout: "deployment.apps/frontend created\n"
Dec 28 22:29:03.046: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 28 22:29:03.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9651'
Dec 28 22:29:03.485: INFO: stderr: ""
Dec 28 22:29:03.485: INFO: stdout: "deployment.apps/agnhost-master created\n"
Dec 28 22:29:03.486: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 28 22:29:03.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9651'
Dec 28 22:29:04.076: INFO: stderr: ""
Dec 28 22:29:04.077: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Dec 28 22:29:04.077: INFO: Waiting for all frontend pods to be Running.
Dec 28 22:29:24.129: INFO: Waiting for frontend to serve content.
Dec 28 22:29:24.155: INFO: Trying to add a new entry to the guestbook.
Dec 28 22:29:24.171: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Dec 28 22:29:29.196: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

[... the same connection-refused failure ("encountered error while propagating to slave '10.32.0.1'") repeated every ~5 seconds from 22:29:34 through 22:32:15; identical intermediate retries omitted ...]

Dec 28 22:32:20.199: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Dec 28 22:32:25.200: FAIL: Could not add a new entry within 180 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x5424e60, 0xc001595760, 0xc0051a3120, 0xc)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:417 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0003e5700)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc0003e5700)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc0003e5700, 0x4c30de8)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
STEP: using delete to clean up resources
Dec 28 22:32:25.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9651'
Dec 28 22:32:28.738: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 22:32:28.738: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 22:32:28.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9651'
Dec 28 22:32:29.024: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 22:32:29.024: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 22:32:29.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9651'
Dec 28 22:32:29.218: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 22:32:29.218: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 22:32:29.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9651'
Dec 28 22:32:29.340: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 22:32:29.340: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 22:32:29.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9651'
Dec 28 22:32:29.517: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 22:32:29.517: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 22:32:29.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9651'
Dec 28 22:32:29.736: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 22:32:29.737: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
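
Each cleanup invocation above reads the original manifest from stdin (the trailing `-f -`), which the log elides; a minimal shell equivalent, with the manifest left as a placeholder, is:

    # Sketch of the force-delete pattern used above. The heredoc body stands
    # in for the original manifest, which is not printed in the log.
    kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force \
      -f - --namespace=kubectl-9651 <<'EOF'
    # ...service/deployment manifest originally applied by the test...
    EOF
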
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "kubectl-9651".
STEP: Found 37 events.
Dec 28 22:32:29.784: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-l4p5n: {default-scheduler } Scheduled: Successfully assigned kubectl-9651/agnhost-master-74c46fb7d4-l4p5n to jerma-node
Dec 28 22:32:29.784: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-c9tqq: {default-scheduler } Scheduled: Successfully assigned kubectl-9651/agnhost-slave-774cfc759f-c9tqq to jerma-server-4b75xjbddvit
Dec 28 22:32:29.784: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-f5hhs: {default-scheduler } Scheduled: Successfully assigned kubectl-9651/agnhost-slave-774cfc759f-f5hhs to jerma-node
Dec 28 22:32:29.784: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-6zf8m: {default-scheduler } Scheduled: Successfully assigned kubectl-9651/frontend-6c5f89d5d4-6zf8m to jerma-node
Dec 28 22:32:29.784: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-djqbq: {default-scheduler } Scheduled: Successfully assigned kubectl-9651/frontend-6c5f89d5d4-djqbq to jerma-server-4b75xjbddvit
Dec 28 22:32:29.784: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-wk55m: {default-scheduler } Scheduled: Successfully assigned kubectl-9651/frontend-6c5f89d5d4-wk55m to jerma-node
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:03 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:03 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-l4p5n
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:03 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:03 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-wk55m
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:03 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-6zf8m
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:03 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-djqbq
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:04 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:04 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-c9tqq
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:04 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-f5hhs
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:08 +0000 UTC - event for agnhost-slave-774cfc759f-c9tqq: {kubelet jerma-server-4b75xjbddvit} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:09 +0000 UTC - event for frontend-6c5f89d5d4-djqbq: {kubelet jerma-server-4b75xjbddvit} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:10 +0000 UTC - event for frontend-6c5f89d5d4-6zf8m: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:12 +0000 UTC - event for agnhost-slave-774cfc759f-c9tqq: {kubelet jerma-server-4b75xjbddvit} Created: Created container slave
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:12 +0000 UTC - event for frontend-6c5f89d5d4-djqbq: {kubelet jerma-server-4b75xjbddvit} Created: Created container guestbook-frontend
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:13 +0000 UTC - event for agnhost-slave-774cfc759f-c9tqq: {kubelet jerma-server-4b75xjbddvit} Started: Started container slave
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:13 +0000 UTC - event for frontend-6c5f89d5d4-djqbq: {kubelet jerma-server-4b75xjbddvit} Started: Started container guestbook-frontend
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:13 +0000 UTC - event for frontend-6c5f89d5d4-wk55m: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:15 +0000 UTC - event for frontend-6c5f89d5d4-6zf8m: {kubelet jerma-node} Created: Created container guestbook-frontend
Dec 28 22:32:29.784: INFO: At 2019-12-28 22:29:16 +0000 UTC - event for agnhost-slave-774cfc759f-f5hhs: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:29:17 +0000 UTC - event for agnhost-master-74c46fb7d4-l4p5n: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:29:18 +0000 UTC - event for frontend-6c5f89d5d4-6zf8m: {kubelet jerma-node} Started: Started container guestbook-frontend
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:29:18 +0000 UTC - event for frontend-6c5f89d5d4-wk55m: {kubelet jerma-node} Created: Created container guestbook-frontend
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:29:19 +0000 UTC - event for agnhost-master-74c46fb7d4-l4p5n: {kubelet jerma-node} Created: Created container master
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:29:19 +0000 UTC - event for agnhost-slave-774cfc759f-f5hhs: {kubelet jerma-node} Created: Created container slave
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:29:20 +0000 UTC - event for agnhost-master-74c46fb7d4-l4p5n: {kubelet jerma-node} Started: Started container master
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:29:20 +0000 UTC - event for agnhost-slave-774cfc759f-f5hhs: {kubelet jerma-node} Started: Started container slave
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:29:20 +0000 UTC - event for frontend-6c5f89d5d4-wk55m: {kubelet jerma-node} Started: Started container guestbook-frontend
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:32:29 +0000 UTC - event for agnhost-master-74c46fb7d4-l4p5n: {kubelet jerma-node} Killing: Stopping container master
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:32:29 +0000 UTC - event for frontend-6c5f89d5d4-6zf8m: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:32:29 +0000 UTC - event for frontend-6c5f89d5d4-djqbq: {kubelet jerma-server-4b75xjbddvit} Killing: Stopping container guestbook-frontend
Dec 28 22:32:29.785: INFO: At 2019-12-28 22:32:29 +0000 UTC - event for frontend-6c5f89d5d4-wk55m: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Dec 28 22:32:29.812: INFO: POD                              NODE                       PHASE    GRACE  CONDITIONS
Dec 28 22:32:29.813: INFO: agnhost-master-74c46fb7d4-l4p5n  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:03 +0000 UTC  }]
Dec 28 22:32:29.813: INFO: agnhost-slave-774cfc759f-c9tqq   jerma-server-4b75xjbddvit  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:04 +0000 UTC  }]
Dec 28 22:32:29.813: INFO: agnhost-slave-774cfc759f-f5hhs   jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:04 +0000 UTC  }]
Dec 28 22:32:29.813: INFO: frontend-6c5f89d5d4-6zf8m        jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:03 +0000 UTC  }]
Dec 28 22:32:29.813: INFO: frontend-6c5f89d5d4-djqbq        jerma-server-4b75xjbddvit  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:03 +0000 UTC  }]
Dec 28 22:32:29.813: INFO: frontend-6c5f89d5d4-wk55m        jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 22:29:03 +0000 UTC  }]
Dec 28 22:32:29.813: INFO: 
Dec 28 22:32:29.824: INFO: 
Logging node info for node jerma-node
Dec 28 22:32:29.984: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 77a1de86-fa0a-4097-aa1b-ddd3667d796b 10440361 0 2019-10-12 13:47:49 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-12-17 21:23:22 +0000 UTC,LastTransitionTime:2019-12-17 21:23:22 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-12-28 22:31:48 +0000 UTC,LastTransitionTime:2019-10-12 13:47:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-12-28 22:31:48 +0000 UTC,LastTransitionTime:2019-10-12 13:47:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-12-28 22:31:48 +0000 UTC,LastTransitionTime:2019-10-12 13:47:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-12-28 22:31:48 +0000 UTC,LastTransitionTime:2019-10-12 13:48:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.170,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4eaf1504b38c4046a625a134490a5292,SystemUUID:4EAF1504-B38C-4046-A625-A134490A5292,BootID:be260572-5100-4207-9fbc-2294735ff8aa,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.16.1,KubeProxyVersion:v1.16.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2],SizeBytes:148150868,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:adb4d547241d08bbb25a928b7356b9f122c4a2e81abfe47aebdd659097e79dbc k8s.gcr.io/kube-proxy:v1.16.1],SizeBytes:86061020,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2],SizeBytes:49569458,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd 
gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:1303dbf110c57f3edf68d9f5a16c082ec06c4cf7604831669faf2c712260b5a0 busybox@sha256:b91fb3b63e212bb0d3dd0461025b969705b1df565a8bd454bd5095aa7bea9221],SizeBytes:1219790,},ContainerImage{Names:[busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Dec 28 22:32:29.986: INFO: 
Logging kubelet events for node jerma-node
Dec 28 22:32:29.996: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Dec 28 22:32:30.065: INFO: weave-net-srfjj started at 2019-12-17 21:23:16 +0000 UTC (0+2 container statuses recorded)
Dec 28 22:32:30.066: INFO: 	Container weave ready: true, restart count 0
Dec 28 22:32:30.066: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 22:32:30.066: INFO: frontend-6c5f89d5d4-6zf8m started at 2019-12-28 22:29:03 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.066: INFO: 	Container guestbook-frontend ready: true, restart count 0
Dec 28 22:32:30.066: INFO: frontend-6c5f89d5d4-wk55m started at 2019-12-28 22:29:04 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.066: INFO: 	Container guestbook-frontend ready: true, restart count 0
Dec 28 22:32:30.066: INFO: kube-proxy-jcjl4 started at 2019-10-12 13:47:49 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.066: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 22:32:30.066: INFO: agnhost-master-74c46fb7d4-l4p5n started at 2019-12-28 22:29:04 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.066: INFO: 	Container master ready: true, restart count 0
Dec 28 22:32:30.066: INFO: agnhost-slave-774cfc759f-f5hhs started at 2019-12-28 22:29:05 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.066: INFO: 	Container slave ready: true, restart count 0
W1228 22:32:30.128264       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 22:32:30.210: INFO: 
Latency metrics for node jerma-node
Dec 28 22:32:30.211: INFO: 
Logging node info for node jerma-server-4b75xjbddvit
Dec 28 22:32:30.228: INFO: Node Info: &Node{ObjectMeta:{jerma-server-4b75xjbddvit   /api/v1/nodes/jerma-server-4b75xjbddvit 65247a99-359d-4f89-a587-9b1e2846985b 10440363 0 2019-10-12 13:29:03 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-4b75xjbddvit kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136026112 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031168512 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-12-13 09:17:15 +0000 UTC,LastTransitionTime:2019-12-13 09:17:15 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-12-28 22:31:48 +0000 UTC,LastTransitionTime:2019-10-12 13:29:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-12-28 22:31:48 +0000 UTC,LastTransitionTime:2019-12-13 09:12:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-12-28 22:31:48 +0000 UTC,LastTransitionTime:2019-10-12 13:29:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-12-28 22:31:48 +0000 UTC,LastTransitionTime:2019-10-12 13:29:53 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.3.35,},NodeAddress{Type:Hostname,Address:jerma-server-4b75xjbddvit,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c617e976dd6040539102788a191b2ea4,SystemUUID:C617E976-DD60-4053-9102-788A191B2EA4,BootID:b7792a6d-7352-4851-9822-f2fa8fe18763,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.16.1,KubeProxyVersion:v1.16.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15 k8s.gcr.io/etcd:3.3.15-0],SizeBytes:246640776,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:80feeaed6c6445ab0ea0c27153354c3cac19b8b028d9b14fc134f947e716e25e k8s.gcr.io/kube-apiserver:v1.16.1],SizeBytes:217083230,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:36259393d3c7cb84a6420db94dccfc75faa8adc9841142467691b7123ab4e8b8 k8s.gcr.io/kube-controller-manager:v1.16.1],SizeBytes:163318238,},ContainerImage{Names:[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2],SizeBytes:148150868,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:c51d0cff4c90fd1ed1e0c62509c4bee2035f8815c68ed355d3643f0db3d084a9 k8s.gcr.io/kube-scheduler:v1.16.1],SizeBytes:87269918,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:adb4d547241d08bbb25a928b7356b9f122c4a2e81abfe47aebdd659097e79dbc k8s.gcr.io/kube-proxy:v1.16.1],SizeBytes:86061020,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2],SizeBytes:49569458,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Dec 28 22:32:30.229: INFO: 
Logging kubelet events for node jerma-server-4b75xjbddvit
Dec 28 22:32:30.234: INFO: 
Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit
Dec 28 22:32:30.255: INFO: kube-scheduler-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:42 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.255: INFO: 	Container kube-scheduler ready: true, restart count 16
Dec 28 22:32:30.255: INFO: kube-proxy-bdcvr started at 2019-12-13 09:08:20 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.255: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 22:32:30.255: INFO: coredns-5644d7b6d9-xvlxj started at 2019-12-14 16:49:52 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.255: INFO: 	Container coredns ready: true, restart count 0
Dec 28 22:32:30.255: INFO: etcd-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:37 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.255: INFO: 	Container etcd ready: true, restart count 1
Dec 28 22:32:30.255: INFO: kube-controller-manager-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:40 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.255: INFO: 	Container kube-controller-manager ready: true, restart count 13
Dec 28 22:32:30.255: INFO: kube-apiserver-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:38 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.255: INFO: 	Container kube-apiserver ready: true, restart count 1
Dec 28 22:32:30.255: INFO: coredns-5644d7b6d9-n9kkw started at 2019-11-10 16:39:08 +0000 UTC (0+0 container statuses recorded)
Dec 28 22:32:30.255: INFO: frontend-6c5f89d5d4-djqbq started at 2019-12-28 22:29:03 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.255: INFO: 	Container guestbook-frontend ready: true, restart count 0
Dec 28 22:32:30.255: INFO: agnhost-slave-774cfc759f-c9tqq started at 2019-12-28 22:29:04 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.255: INFO: 	Container slave ready: true, restart count 0
Dec 28 22:32:30.255: INFO: coredns-5644d7b6d9-rqwzj started at 2019-11-10 18:03:38 +0000 UTC (0+0 container statuses recorded)
Dec 28 22:32:30.255: INFO: weave-net-gsjjk started at 2019-12-13 09:16:56 +0000 UTC (0+2 container statuses recorded)
Dec 28 22:32:30.255: INFO: 	Container weave ready: true, restart count 0
Dec 28 22:32:30.255: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 22:32:30.255: INFO: coredns-5644d7b6d9-9sj58 started at 2019-12-14 15:12:12 +0000 UTC (0+1 container statuses recorded)
Dec 28 22:32:30.255: INFO: 	Container coredns ready: true, restart count 0
W1228 22:32:30.261526       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 22:32:30.306: INFO: 
Latency metrics for node jerma-server-4b75xjbddvit
Dec 28 22:32:30.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9651" for this suite.

• Failure [209.852 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385
    should create and stop a working application  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

    Dec 28 22:32:25.200: Cannot add new entry in 180 seconds.

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":210,"skipped":3229,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:32:30.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
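
The per-container assertions above read standard fields from the pod's status; assuming a pod name (a placeholder here, the log does not print it), the same fields can be inspected by hand:

    # Inspect the fields the test asserts on: phase, restart count, and the
    # container state; <pod-name> is a placeholder for the test pod.
    kubectl -n container-runtime-245 get pod <pod-name> \
      -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].state}'
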
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:33:20.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-245" for this suite.

• [SLOW TEST:50.635 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3262,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:33:20.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
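
The log does not print the test pod manifest itself; a minimal hypothetical equivalent (pod name and command are illustrative, the tmpfs-backed emptyDir fields are standard) would be:

    # Hypothetical stand-in for the test pod: a tmpfs-backed emptyDir
    # (medium: Memory) mounted into a container that prints the mode bits.
    kubectl apply -n emptydir-6563 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-example
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["sh", "-c", "stat -c '%a' /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory
    EOF
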
Dec 28 22:33:21.108: INFO: Waiting up to 5m0s for pod "pod-dac2056e-a151-4a6b-8c6d-923d1066736b" in namespace "emptydir-6563" to be "success or failure"
Dec 28 22:33:21.200: INFO: Pod "pod-dac2056e-a151-4a6b-8c6d-923d1066736b": Phase="Pending", Reason="", readiness=false. Elapsed: 91.63043ms
Dec 28 22:33:23.208: INFO: Pod "pod-dac2056e-a151-4a6b-8c6d-923d1066736b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099633596s
Dec 28 22:33:25.219: INFO: Pod "pod-dac2056e-a151-4a6b-8c6d-923d1066736b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111350461s
Dec 28 22:33:27.227: INFO: Pod "pod-dac2056e-a151-4a6b-8c6d-923d1066736b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119297047s
Dec 28 22:33:29.235: INFO: Pod "pod-dac2056e-a151-4a6b-8c6d-923d1066736b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12677009s
STEP: Saw pod success
Dec 28 22:33:29.235: INFO: Pod "pod-dac2056e-a151-4a6b-8c6d-923d1066736b" satisfied condition "success or failure"
Dec 28 22:33:29.241: INFO: Trying to get logs from node jerma-node pod pod-dac2056e-a151-4a6b-8c6d-923d1066736b container test-container: 
STEP: delete the pod
Dec 28 22:33:29.408: INFO: Waiting for pod pod-dac2056e-a151-4a6b-8c6d-923d1066736b to disappear
Dec 28 22:33:29.432: INFO: Pod pod-dac2056e-a151-4a6b-8c6d-923d1066736b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:33:29.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6563" for this suite.

• [SLOW TEST:8.494 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3266,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:33:29.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-5558
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 28 22:33:29.644: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 28 22:34:01.972: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5558 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:34:01.973: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:34:02.236: INFO: Waiting for responses: map[]
Dec 28 22:34:02.247: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5558 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:34:02.247: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:34:02.460: INFO: Waiting for responses: map[]
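
The intra-pod checks above go through agnhost's /dial endpoint; the second probe can be re-issued by hand with the addresses from this run:

    # Manual re-run of the second probe above: from host-test-container-pod,
    # ask the test pod's agnhost server at 10.44.0.2:8080 to dial the peer
    # endpoint pod at 10.32.0.4:8080 and report the hostname it sees.
    kubectl -n pod-network-test-5558 exec host-test-container-pod -c agnhost -- \
      /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'"
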
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:34:02.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5558" for this suite.

• [SLOW TEST:33.043 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3268,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:34:02.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 28 22:34:03.198: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 28 22:34:05.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:34:07.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:34:09.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:34:11.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:34:13.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169243, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 28 22:34:16.333: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
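
Both deletions above act on cluster-scoped admissionregistration objects; the log does not print the dummy objects' names, so they are placeholders in this sketch:

    # Sketch of the two deletions the test performs; the configuration names
    # are hypothetical (not printed in the log).
    kubectl --kubeconfig=/root/.kube/config delete validatingwebhookconfiguration <dummy-validating-name>
    kubectl --kubeconfig=/root/.kube/config delete mutatingwebhookconfiguration <dummy-mutating-name>
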
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:34:16.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5468" for this suite.
STEP: Destroying namespace "webhook-5468-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.210 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":214,"skipped":3276,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:34:16.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Dec 28 22:34:16.907: INFO: Waiting up to 5m0s for pod "client-containers-793713f9-80d9-48c5-be09-4e96429a607f" in namespace "containers-9260" to be "success or failure"
Dec 28 22:34:16.937: INFO: Pod "client-containers-793713f9-80d9-48c5-be09-4e96429a607f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.08796ms
Dec 28 22:34:18.947: INFO: Pod "client-containers-793713f9-80d9-48c5-be09-4e96429a607f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040113027s
Dec 28 22:34:21.023: INFO: Pod "client-containers-793713f9-80d9-48c5-be09-4e96429a607f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115897807s
Dec 28 22:34:23.032: INFO: Pod "client-containers-793713f9-80d9-48c5-be09-4e96429a607f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125118033s
Dec 28 22:34:25.084: INFO: Pod "client-containers-793713f9-80d9-48c5-be09-4e96429a607f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176526511s
Dec 28 22:34:27.151: INFO: Pod "client-containers-793713f9-80d9-48c5-be09-4e96429a607f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.243564711s
STEP: Saw pod success
Dec 28 22:34:27.151: INFO: Pod "client-containers-793713f9-80d9-48c5-be09-4e96429a607f" satisfied condition "success or failure"
Dec 28 22:34:27.156: INFO: Trying to get logs from node jerma-node pod client-containers-793713f9-80d9-48c5-be09-4e96429a607f container test-container: 
STEP: delete the pod
Dec 28 22:34:27.202: INFO: Waiting for pod client-containers-793713f9-80d9-48c5-be09-4e96429a607f to disappear
Dec 28 22:34:27.213: INFO: Pod client-containers-793713f9-80d9-48c5-be09-4e96429a607f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:34:27.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9260" for this suite.

• [SLOW TEST:10.546 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3308,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:34:27.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Dec 28 22:34:27.353: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 28 22:34:27.377: INFO: Waiting for terminating namespaces to be deleted...
Dec 28 22:34:27.415: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Dec 28 22:34:27.427: INFO: weave-net-srfjj from kube-system started at 2019-12-17 21:23:16 +0000 UTC (2 container statuses recorded)
Dec 28 22:34:27.427: INFO: 	Container weave ready: true, restart count 0
Dec 28 22:34:27.427: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 22:34:27.427: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container statuses recorded)
Dec 28 22:34:27.427: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 22:34:27.427: INFO: 
Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit before test
Dec 28 22:34:27.447: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container statuses recorded)
Dec 28 22:34:27.447: INFO: 	Container kube-apiserver ready: true, restart count 1
Dec 28 22:34:27.447: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded)
Dec 28 22:34:27.447: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded)
Dec 28 22:34:27.447: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded)
Dec 28 22:34:27.447: INFO: 	Container weave ready: true, restart count 0
Dec 28 22:34:27.447: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 22:34:27.447: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container statuses recorded)
Dec 28 22:34:27.447: INFO: 	Container coredns ready: true, restart count 0
Dec 28 22:34:27.447: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container statuses recorded)
Dec 28 22:34:27.447: INFO: 	Container kube-scheduler ready: true, restart count 16
Dec 28 22:34:27.447: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container statuses recorded)
Dec 28 22:34:27.447: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 22:34:27.447: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container statuses recorded)
Dec 28 22:34:27.447: INFO: 	Container coredns ready: true, restart count 0
Dec 28 22:34:27.447: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container statuses recorded)
Dec 28 22:34:27.447: INFO: 	Container etcd ready: true, restart count 1
Dec 28 22:34:27.447: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container statuses recorded)
Dec 28 22:34:27.447: INFO: 	Container kube-controller-manager ready: true, restart count 13
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-07757a9a-5623-4614-b424-184ce4affaec 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (the empty-string default), expecting it to be scheduled
STEP: Trying to create another pod (pod5) with the same hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides, expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-07757a9a-5623-4614-b424-184ce4affaec off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-07757a9a-5623-4614-b424-184ce4affaec
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:39:45.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2143" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:318.622 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":216,"skipped":3316,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:39:45.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-fe68aa85-433a-4bb4-b1b1-4e91bef479eb
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-fe68aa85-433a-4bb4-b1b1-4e91bef479eb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:39:58.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3142" for this suite.

• [SLOW TEST:12.267 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3337,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:39:58.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:39:58.417: INFO: (0) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 50.767939ms)
Dec 28 22:39:58.426: INFO: (1) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 8.404588ms)
Dec 28 22:39:58.437: INFO: (2) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 10.923644ms)
Dec 28 22:39:58.444: INFO: (3) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 7.457337ms)
Dec 28 22:39:58.453: INFO: (4) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 8.353053ms)
Dec 28 22:39:58.481: INFO: (5) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 27.863572ms)
Dec 28 22:39:58.494: INFO: (6) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 12.748808ms)
Dec 28 22:39:58.503: INFO: (7) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 9.454589ms)
Dec 28 22:39:58.510: INFO: (8) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 5.94095ms)
Dec 28 22:39:58.517: INFO: (9) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 7.893087ms)
Dec 28 22:39:58.524: INFO: (10) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 6.241293ms)
Dec 28 22:39:58.530: INFO: (11) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 5.888699ms)
Dec 28 22:39:58.536: INFO: (12) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 5.832828ms)
Dec 28 22:39:58.543: INFO: (13) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 6.862027ms)
Dec 28 22:39:58.546: INFO: (14) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 3.716947ms)
Dec 28 22:39:58.554: INFO: (15) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 7.915982ms)
Dec 28 22:39:58.560: INFO: (16) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 5.262638ms)
Dec 28 22:39:58.563: INFO: (17) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 3.712447ms)
Dec 28 22:39:58.569: INFO: (18) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 5.379542ms)
Dec 28 22:39:58.575: INFO: (19) /api/v1/nodes/jerma-server-4b75xjbddvit/proxy/logs/: alternatives.log alternatives.l... (200; 6.320845ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:39:58.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2391" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":218,"skipped":3367,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:39:58.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 28 22:40:07.832: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:40:07.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1339" for this suite.

• [SLOW TEST:9.346 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3386,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:40:07.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-9605/secret-test-b5fe642d-08f7-4afc-8493-21f01a964a10
STEP: Creating a pod to test consume secrets
Dec 28 22:40:08.036: INFO: Waiting up to 5m0s for pod "pod-configmaps-a03c8f38-70e5-4f15-8ccd-86b27cfe0c5d" in namespace "secrets-9605" to be "success or failure"
Dec 28 22:40:08.183: INFO: Pod "pod-configmaps-a03c8f38-70e5-4f15-8ccd-86b27cfe0c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 147.269606ms
Dec 28 22:40:10.190: INFO: Pod "pod-configmaps-a03c8f38-70e5-4f15-8ccd-86b27cfe0c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153897133s
Dec 28 22:40:12.202: INFO: Pod "pod-configmaps-a03c8f38-70e5-4f15-8ccd-86b27cfe0c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165576605s
Dec 28 22:40:14.211: INFO: Pod "pod-configmaps-a03c8f38-70e5-4f15-8ccd-86b27cfe0c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17529861s
Dec 28 22:40:16.230: INFO: Pod "pod-configmaps-a03c8f38-70e5-4f15-8ccd-86b27cfe0c5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.193428748s
STEP: Saw pod success
Dec 28 22:40:16.230: INFO: Pod "pod-configmaps-a03c8f38-70e5-4f15-8ccd-86b27cfe0c5d" satisfied condition "success or failure"
Dec 28 22:40:16.237: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a03c8f38-70e5-4f15-8ccd-86b27cfe0c5d container env-test: 
STEP: delete the pod
Dec 28 22:40:16.315: INFO: Waiting for pod pod-configmaps-a03c8f38-70e5-4f15-8ccd-86b27cfe0c5d to disappear
Dec 28 22:40:16.322: INFO: Pod pod-configmaps-a03c8f38-70e5-4f15-8ccd-86b27cfe0c5d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:40:16.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9605" for this suite.

• [SLOW TEST:8.391 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3386,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:40:16.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 28 22:40:16.466: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04c9ccef-af65-468e-91a8-93918b650ce9" in namespace "projected-9255" to be "success or failure"
Dec 28 22:40:16.485: INFO: Pod "downwardapi-volume-04c9ccef-af65-468e-91a8-93918b650ce9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.466766ms
Dec 28 22:40:18.507: INFO: Pod "downwardapi-volume-04c9ccef-af65-468e-91a8-93918b650ce9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040895764s
Dec 28 22:40:20.521: INFO: Pod "downwardapi-volume-04c9ccef-af65-468e-91a8-93918b650ce9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054395294s
Dec 28 22:40:22.546: INFO: Pod "downwardapi-volume-04c9ccef-af65-468e-91a8-93918b650ce9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078999827s
Dec 28 22:40:24.561: INFO: Pod "downwardapi-volume-04c9ccef-af65-468e-91a8-93918b650ce9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094308858s
STEP: Saw pod success
Dec 28 22:40:24.561: INFO: Pod "downwardapi-volume-04c9ccef-af65-468e-91a8-93918b650ce9" satisfied condition "success or failure"
Dec 28 22:40:24.565: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-04c9ccef-af65-468e-91a8-93918b650ce9 container client-container: 
STEP: delete the pod
Dec 28 22:40:24.633: INFO: Waiting for pod downwardapi-volume-04c9ccef-af65-468e-91a8-93918b650ce9 to disappear
Dec 28 22:40:24.637: INFO: Pod downwardapi-volume-04c9ccef-af65-468e-91a8-93918b650ce9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:40:24.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9255" for this suite.

• [SLOW TEST:8.337 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3463,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:40:24.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Dec 28 22:40:24.762: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 28 22:40:24.820: INFO: Waiting for terminating namespaces to be deleted...
Dec 28 22:40:24.830: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Dec 28 22:40:24.837: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container status recorded)
Dec 28 22:40:24.837: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 22:40:24.837: INFO: weave-net-srfjj from kube-system started at 2019-12-17 21:23:16 +0000 UTC (2 container statuses recorded)
Dec 28 22:40:24.837: INFO: 	Container weave ready: true, restart count 0
Dec 28 22:40:24.837: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 22:40:24.837: INFO: 
Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit before test
Dec 28 22:40:24.847: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded)
Dec 28 22:40:24.848: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded)
Dec 28 22:40:24.848: INFO: 	Container weave ready: true, restart count 0
Dec 28 22:40:24.848: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 22:40:24.848: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container status recorded)
Dec 28 22:40:24.848: INFO: 	Container coredns ready: true, restart count 0
Dec 28 22:40:24.848: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container status recorded)
Dec 28 22:40:24.848: INFO: 	Container kube-scheduler ready: true, restart count 16
Dec 28 22:40:24.848: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container status recorded)
Dec 28 22:40:24.848: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 22:40:24.848: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container status recorded)
Dec 28 22:40:24.848: INFO: 	Container coredns ready: true, restart count 0
Dec 28 22:40:24.848: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container status recorded)
Dec 28 22:40:24.848: INFO: 	Container etcd ready: true, restart count 1
Dec 28 22:40:24.848: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container status recorded)
Dec 28 22:40:24.848: INFO: 	Container kube-controller-manager ready: true, restart count 13
Dec 28 22:40:24.848: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container status recorded)
Dec 28 22:40:24.848: INFO: 	Container kube-apiserver ready: true, restart count 1
Dec 28 22:40:24.848: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-290ac5a1-ceca-473c-9797-8ace27084a61 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-290ac5a1-ceca-473c-9797-8ace27084a61 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-290ac5a1-ceca-473c-9797-8ace27084a61
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:40:40.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2498" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.352 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":222,"skipped":3480,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:40:41.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 28 22:40:41.186: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf3f1678-0ac4-4234-a8f2-f6a264cea075" in namespace "projected-9511" to be "success or failure"
Dec 28 22:40:41.217: INFO: Pod "downwardapi-volume-cf3f1678-0ac4-4234-a8f2-f6a264cea075": Phase="Pending", Reason="", readiness=false. Elapsed: 31.197282ms
Dec 28 22:40:43.225: INFO: Pod "downwardapi-volume-cf3f1678-0ac4-4234-a8f2-f6a264cea075": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038825966s
Dec 28 22:40:45.233: INFO: Pod "downwardapi-volume-cf3f1678-0ac4-4234-a8f2-f6a264cea075": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047061324s
Dec 28 22:40:47.237: INFO: Pod "downwardapi-volume-cf3f1678-0ac4-4234-a8f2-f6a264cea075": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051069896s
Dec 28 22:40:49.251: INFO: Pod "downwardapi-volume-cf3f1678-0ac4-4234-a8f2-f6a264cea075": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065480827s
Dec 28 22:40:51.257: INFO: Pod "downwardapi-volume-cf3f1678-0ac4-4234-a8f2-f6a264cea075": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071355111s
STEP: Saw pod success
Dec 28 22:40:51.257: INFO: Pod "downwardapi-volume-cf3f1678-0ac4-4234-a8f2-f6a264cea075" satisfied condition "success or failure"
Dec 28 22:40:51.260: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-cf3f1678-0ac4-4234-a8f2-f6a264cea075 container client-container: 
STEP: delete the pod
Dec 28 22:40:51.317: INFO: Waiting for pod downwardapi-volume-cf3f1678-0ac4-4234-a8f2-f6a264cea075 to disappear
Dec 28 22:40:51.331: INFO: Pod downwardapi-volume-cf3f1678-0ac4-4234-a8f2-f6a264cea075 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:40:51.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9511" for this suite.

• [SLOW TEST:10.318 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3482,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:40:51.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 28 22:40:51.544: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b6a36f7-1b27-4c5e-b160-91a8c0d1c12d" in namespace "downward-api-9286" to be "success or failure"
Dec 28 22:40:51.556: INFO: Pod "downwardapi-volume-7b6a36f7-1b27-4c5e-b160-91a8c0d1c12d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.551311ms
Dec 28 22:40:53.565: INFO: Pod "downwardapi-volume-7b6a36f7-1b27-4c5e-b160-91a8c0d1c12d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021264969s
Dec 28 22:40:55.574: INFO: Pod "downwardapi-volume-7b6a36f7-1b27-4c5e-b160-91a8c0d1c12d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030078182s
Dec 28 22:40:57.582: INFO: Pod "downwardapi-volume-7b6a36f7-1b27-4c5e-b160-91a8c0d1c12d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038687036s
Dec 28 22:40:59.593: INFO: Pod "downwardapi-volume-7b6a36f7-1b27-4c5e-b160-91a8c0d1c12d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049187723s
STEP: Saw pod success
Dec 28 22:40:59.593: INFO: Pod "downwardapi-volume-7b6a36f7-1b27-4c5e-b160-91a8c0d1c12d" satisfied condition "success or failure"
Dec 28 22:40:59.598: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7b6a36f7-1b27-4c5e-b160-91a8c0d1c12d container client-container: 
STEP: delete the pod
Dec 28 22:40:59.631: INFO: Waiting for pod downwardapi-volume-7b6a36f7-1b27-4c5e-b160-91a8c0d1c12d to disappear
Dec 28 22:40:59.687: INFO: Pod downwardapi-volume-7b6a36f7-1b27-4c5e-b160-91a8c0d1c12d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:40:59.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9286" for this suite.

• [SLOW TEST:8.359 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3492,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:40:59.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Dec 28 22:41:10.383: INFO: Successfully updated pod "adopt-release-kgj5v"
STEP: Checking that the Job readopts the Pod
Dec 28 22:41:10.384: INFO: Waiting up to 15m0s for pod "adopt-release-kgj5v" in namespace "job-6903" to be "adopted"
Dec 28 22:41:10.388: INFO: Pod "adopt-release-kgj5v": Phase="Running", Reason="", readiness=true. Elapsed: 4.243582ms
Dec 28 22:41:12.407: INFO: Pod "adopt-release-kgj5v": Phase="Running", Reason="", readiness=true. Elapsed: 2.023595485s
Dec 28 22:41:12.408: INFO: Pod "adopt-release-kgj5v" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Dec 28 22:41:12.941: INFO: Successfully updated pod "adopt-release-kgj5v"
STEP: Checking that the Job releases the Pod
Dec 28 22:41:12.941: INFO: Waiting up to 15m0s for pod "adopt-release-kgj5v" in namespace "job-6903" to be "released"
Dec 28 22:41:12.956: INFO: Pod "adopt-release-kgj5v": Phase="Running", Reason="", readiness=true. Elapsed: 14.526ms
Dec 28 22:41:12.956: INFO: Pod "adopt-release-kgj5v" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:41:12.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6903" for this suite.

• [SLOW TEST:13.393 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":225,"skipped":3530,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:41:13.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Dec 28 22:41:13.352: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Dec 28 22:41:15.172: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Dec 28 22:41:18.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169674, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:41:20.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169674, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:41:22.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169674, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:41:24.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169675, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169674, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:41:27.690: INFO: Waited 1.036465914s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:41:28.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4797" for this suite.

• [SLOW TEST:15.167 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":226,"skipped":3538,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:41:28.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Dec 28 22:41:28.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5003'
Dec 28 22:41:28.787: INFO: stderr: ""
Dec 28 22:41:28.787: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 22:41:28.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5003'
Dec 28 22:41:29.072: INFO: stderr: ""
Dec 28 22:41:29.073: INFO: stdout: "update-demo-nautilus-24hrv update-demo-nautilus-vfdx5 "
Dec 28 22:41:29.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24hrv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5003'
Dec 28 22:41:29.233: INFO: stderr: ""
Dec 28 22:41:29.233: INFO: stdout: ""
Dec 28 22:41:29.233: INFO: update-demo-nautilus-24hrv is created but not running
Dec 28 22:41:34.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5003'
Dec 28 22:41:34.403: INFO: stderr: ""
Dec 28 22:41:34.403: INFO: stdout: "update-demo-nautilus-24hrv update-demo-nautilus-vfdx5 "
Dec 28 22:41:34.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24hrv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5003'
Dec 28 22:41:34.728: INFO: stderr: ""
Dec 28 22:41:34.728: INFO: stdout: ""
Dec 28 22:41:34.728: INFO: update-demo-nautilus-24hrv is created but not running
Dec 28 22:41:39.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5003'
Dec 28 22:41:39.927: INFO: stderr: ""
Dec 28 22:41:39.927: INFO: stdout: "update-demo-nautilus-24hrv update-demo-nautilus-vfdx5 "
Dec 28 22:41:39.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24hrv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5003'
Dec 28 22:41:40.083: INFO: stderr: ""
Dec 28 22:41:40.083: INFO: stdout: "true"
Dec 28 22:41:40.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24hrv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5003'
Dec 28 22:41:40.166: INFO: stderr: ""
Dec 28 22:41:40.166: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 22:41:40.166: INFO: validating pod update-demo-nautilus-24hrv
Dec 28 22:41:40.173: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 22:41:40.173: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 28 22:41:40.173: INFO: update-demo-nautilus-24hrv is verified up and running
Dec 28 22:41:40.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vfdx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5003'
Dec 28 22:41:40.310: INFO: stderr: ""
Dec 28 22:41:40.310: INFO: stdout: "true"
Dec 28 22:41:40.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vfdx5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5003'
Dec 28 22:41:40.395: INFO: stderr: ""
Dec 28 22:41:40.395: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 22:41:40.395: INFO: validating pod update-demo-nautilus-vfdx5
Dec 28 22:41:40.456: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 22:41:40.457: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 28 22:41:40.457: INFO: update-demo-nautilus-vfdx5 is verified up and running
STEP: rolling-update to new replication controller
Dec 28 22:41:40.486: INFO: scanned /root for discovery docs: 
Dec 28 22:41:40.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5003'
Dec 28 22:42:10.694: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 28 22:42:10.695: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 22:42:10.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5003'
Dec 28 22:42:10.921: INFO: stderr: ""
Dec 28 22:42:10.921: INFO: stdout: "update-demo-kitten-72wrq update-demo-kitten-tv2gk "
Dec 28 22:42:10.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-72wrq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5003'
Dec 28 22:42:11.118: INFO: stderr: ""
Dec 28 22:42:11.118: INFO: stdout: "true"
Dec 28 22:42:11.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-72wrq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5003'
Dec 28 22:42:11.205: INFO: stderr: ""
Dec 28 22:42:11.205: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 28 22:42:11.205: INFO: validating pod update-demo-kitten-72wrq
Dec 28 22:42:11.220: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 28 22:42:11.220: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Dec 28 22:42:11.220: INFO: update-demo-kitten-72wrq is verified up and running
Dec 28 22:42:11.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tv2gk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5003'
Dec 28 22:42:11.337: INFO: stderr: ""
Dec 28 22:42:11.337: INFO: stdout: "true"
Dec 28 22:42:11.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tv2gk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5003'
Dec 28 22:42:11.469: INFO: stderr: ""
Dec 28 22:42:11.469: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 28 22:42:11.469: INFO: validating pod update-demo-kitten-tv2gk
Dec 28 22:42:11.478: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 28 22:42:11.478: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Dec 28 22:42:11.478: INFO: update-demo-kitten-tv2gk is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:42:11.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5003" for this suite.

• [SLOW TEST:43.218 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":227,"skipped":3538,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:42:11.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Dec 28 22:42:11.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7974'
Dec 28 22:42:12.053: INFO: stderr: ""
Dec 28 22:42:12.053: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Dec 28 22:42:13.060: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:13.060: INFO: Found 0 / 1
Dec 28 22:42:14.074: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:14.075: INFO: Found 0 / 1
Dec 28 22:42:15.059: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:15.059: INFO: Found 0 / 1
Dec 28 22:42:16.060: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:16.060: INFO: Found 0 / 1
Dec 28 22:42:17.074: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:17.074: INFO: Found 0 / 1
Dec 28 22:42:18.341: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:18.342: INFO: Found 0 / 1
Dec 28 22:42:19.061: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:19.061: INFO: Found 0 / 1
Dec 28 22:42:20.061: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:20.061: INFO: Found 0 / 1
Dec 28 22:42:21.063: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:21.063: INFO: Found 0 / 1
Dec 28 22:42:22.062: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:22.062: INFO: Found 0 / 1
Dec 28 22:42:23.062: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:23.062: INFO: Found 1 / 1
Dec 28 22:42:23.062: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Dec 28 22:42:23.066: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:23.066: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Dec 28 22:42:23.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-crm94 --namespace=kubectl-7974 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 28 22:42:23.266: INFO: stderr: ""
Dec 28 22:42:23.266: INFO: stdout: "pod/agnhost-master-crm94 patched\n"
STEP: checking annotations
Dec 28 22:42:23.272: INFO: Selector matched 1 pod for map[app:agnhost]
Dec 28 22:42:23.272: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:42:23.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7974" for this suite.

• [SLOW TEST:11.798 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":228,"skipped":3538,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:42:23.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:42:53.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-670" for this suite.

• [SLOW TEST:30.136 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":229,"skipped":3541,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:42:53.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 28 22:42:54.330: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 28 22:42:56.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:42:58.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:43:00.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169774, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 28 22:43:03.467: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:43:03.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3009" for this suite.
STEP: Destroying namespace "webhook-3009-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.536 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":230,"skipped":3549,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:43:03.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:43:21.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4217" for this suite.

• [SLOW TEST:17.346 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":231,"skipped":3571,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:43:21.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 28 22:43:22.105: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 28 22:43:24.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:43:26.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:43:28.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169802, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 28 22:43:31.175: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:43:31.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1237" for this suite.
STEP: Destroying namespace "webhook-1237-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.026 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":232,"skipped":3600,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:43:31.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-1675196e-c820-4b70-9bfb-3332301f3b4f
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:43:31.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-324" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":233,"skipped":3636,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:43:31.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:43:42.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4416" for this suite.

• [SLOW TEST:11.296 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":234,"skipped":3651,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:43:42.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-69dd3586-3db0-4379-be15-119138f5ad6a
STEP: Creating a pod to test consume secrets
Dec 28 22:43:42.925: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7954aeec-0aa7-4783-8931-8fa37f843df6" in namespace "projected-1459" to be "success or failure"
Dec 28 22:43:42.975: INFO: Pod "pod-projected-secrets-7954aeec-0aa7-4783-8931-8fa37f843df6": Phase="Pending", Reason="", readiness=false. Elapsed: 50.44143ms
Dec 28 22:43:44.986: INFO: Pod "pod-projected-secrets-7954aeec-0aa7-4783-8931-8fa37f843df6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060918295s
Dec 28 22:43:46.994: INFO: Pod "pod-projected-secrets-7954aeec-0aa7-4783-8931-8fa37f843df6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069201037s
Dec 28 22:43:49.038: INFO: Pod "pod-projected-secrets-7954aeec-0aa7-4783-8931-8fa37f843df6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113462662s
Dec 28 22:43:51.044: INFO: Pod "pod-projected-secrets-7954aeec-0aa7-4783-8931-8fa37f843df6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11924931s
STEP: Saw pod success
Dec 28 22:43:51.044: INFO: Pod "pod-projected-secrets-7954aeec-0aa7-4783-8931-8fa37f843df6" satisfied condition "success or failure"
Dec 28 22:43:51.047: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-7954aeec-0aa7-4783-8931-8fa37f843df6 container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 22:43:51.127: INFO: Waiting for pod pod-projected-secrets-7954aeec-0aa7-4783-8931-8fa37f843df6 to disappear
Dec 28 22:43:51.148: INFO: Pod pod-projected-secrets-7954aeec-0aa7-4783-8931-8fa37f843df6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:43:51.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1459" for this suite.

• [SLOW TEST:8.453 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3661,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:43:51.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 28 22:43:51.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-816'
Dec 28 22:43:54.115: INFO: stderr: ""
Dec 28 22:43:54.116: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Dec 28 22:43:54.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-816'
Dec 28 22:44:06.660: INFO: stderr: ""
Dec 28 22:44:06.660: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:44:06.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-816" for this suite.

• [SLOW TEST:15.521 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":236,"skipped":3675,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:44:06.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Dec 28 22:44:06.881: INFO: Waiting up to 5m0s for pod "var-expansion-41ddfeed-40f8-49a8-9575-cb8bbe8ad46d" in namespace "var-expansion-15" to be "success or failure"
Dec 28 22:44:06.905: INFO: Pod "var-expansion-41ddfeed-40f8-49a8-9575-cb8bbe8ad46d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.208026ms
Dec 28 22:44:08.913: INFO: Pod "var-expansion-41ddfeed-40f8-49a8-9575-cb8bbe8ad46d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031220732s
Dec 28 22:44:10.923: INFO: Pod "var-expansion-41ddfeed-40f8-49a8-9575-cb8bbe8ad46d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041534907s
Dec 28 22:44:12.930: INFO: Pod "var-expansion-41ddfeed-40f8-49a8-9575-cb8bbe8ad46d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048171688s
Dec 28 22:44:14.940: INFO: Pod "var-expansion-41ddfeed-40f8-49a8-9575-cb8bbe8ad46d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058200741s
STEP: Saw pod success
Dec 28 22:44:14.940: INFO: Pod "var-expansion-41ddfeed-40f8-49a8-9575-cb8bbe8ad46d" satisfied condition "success or failure"
Dec 28 22:44:14.943: INFO: Trying to get logs from node jerma-node pod var-expansion-41ddfeed-40f8-49a8-9575-cb8bbe8ad46d container dapi-container: 
STEP: delete the pod
Dec 28 22:44:15.092: INFO: Waiting for pod var-expansion-41ddfeed-40f8-49a8-9575-cb8bbe8ad46d to disappear
Dec 28 22:44:15.095: INFO: Pod var-expansion-41ddfeed-40f8-49a8-9575-cb8bbe8ad46d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:44:15.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-15" for this suite.

• [SLOW TEST:8.486 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3676,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:44:15.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 28 22:44:31.417: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 22:44:31.426: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 22:44:33.434: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 22:44:33.451: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 22:44:35.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 22:44:35.434: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 22:44:37.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 22:44:37.435: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 22:44:39.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 22:44:39.434: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 22:44:41.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 22:44:41.437: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 22:44:43.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 22:44:43.439: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 22:44:45.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 22:44:45.444: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 22:44:47.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 22:44:47.437: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:44:47.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5011" for this suite.

• [SLOW TEST:32.296 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3689,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:44:47.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Dec 28 22:44:47.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Dec 28 22:44:57.733: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:45:00.762: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:45:12.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3900" for this suite.

• [SLOW TEST:24.811 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":239,"skipped":3698,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:45:12.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-ef3d6ce0-0c6f-4353-a6da-435fecb4f62a
STEP: Creating a pod to test consume secrets
Dec 28 22:45:12.401: INFO: Waiting up to 5m0s for pod "pod-secrets-10d4ba91-9b04-4634-b861-fe89bd5fcad5" in namespace "secrets-4393" to be "success or failure"
Dec 28 22:45:12.408: INFO: Pod "pod-secrets-10d4ba91-9b04-4634-b861-fe89bd5fcad5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.054248ms
Dec 28 22:45:14.417: INFO: Pod "pod-secrets-10d4ba91-9b04-4634-b861-fe89bd5fcad5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015660716s
Dec 28 22:45:16.427: INFO: Pod "pod-secrets-10d4ba91-9b04-4634-b861-fe89bd5fcad5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025811187s
Dec 28 22:45:18.438: INFO: Pod "pod-secrets-10d4ba91-9b04-4634-b861-fe89bd5fcad5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037182594s
Dec 28 22:45:20.448: INFO: Pod "pod-secrets-10d4ba91-9b04-4634-b861-fe89bd5fcad5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046866541s
STEP: Saw pod success
Dec 28 22:45:20.448: INFO: Pod "pod-secrets-10d4ba91-9b04-4634-b861-fe89bd5fcad5" satisfied condition "success or failure"
Dec 28 22:45:20.453: INFO: Trying to get logs from node jerma-node pod pod-secrets-10d4ba91-9b04-4634-b861-fe89bd5fcad5 container secret-volume-test: 
STEP: delete the pod
Dec 28 22:45:20.520: INFO: Waiting for pod pod-secrets-10d4ba91-9b04-4634-b861-fe89bd5fcad5 to disappear
Dec 28 22:45:20.530: INFO: Pod pod-secrets-10d4ba91-9b04-4634-b861-fe89bd5fcad5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:45:20.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4393" for this suite.

• [SLOW TEST:8.273 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3708,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:45:20.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:45:20.705: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 28 22:45:20.725: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 28 22:45:25.735: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 28 22:45:27.771: INFO: Creating deployment "test-rolling-update-deployment"
Dec 28 22:45:27.781: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 28 22:45:27.812: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 28 22:45:29.924: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 28 22:45:29.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:45:31.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:45:33.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169927, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:45:35.949: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Dec 28 22:45:35.972: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-3352 /apis/apps/v1/namespaces/deployment-3352/deployments/test-rolling-update-deployment cfa1dc6f-9e5d-4e26-a3f8-625996a7c6c7 10442870 1 2019-12-28 22:45:27 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041c1178  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2019-12-28 22:45:27 +0000 UTC,LastTransitionTime:2019-12-28 22:45:27 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2019-12-28 22:45:34 +0000 UTC,LastTransitionTime:2019-12-28 22:45:27 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Dec 28 22:45:35.977: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-3352 /apis/apps/v1/namespaces/deployment-3352/replicasets/test-rolling-update-deployment-67cf4f6444 276c59d4-4248-4b49-90a2-110b7d586371 10442859 1 2019-12-28 22:45:27 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment cfa1dc6f-9e5d-4e26-a3f8-625996a7c6c7 0xc0041c17f7 0xc0041c17f8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041c18e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Dec 28 22:45:35.977: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 28 22:45:35.977: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-3352 /apis/apps/v1/namespaces/deployment-3352/replicasets/test-rolling-update-controller 6e239386-2a53-42de-b562-b6276178234f 10442868 2 2019-12-28 22:45:20 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment cfa1dc6f-9e5d-4e26-a3f8-625996a7c6c7 0xc0041c16af 0xc0041c16c0}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0041c1748  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 28 22:45:35.983: INFO: Pod "test-rolling-update-deployment-67cf4f6444-h7bf4" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-h7bf4 test-rolling-update-deployment-67cf4f6444- deployment-3352 /api/v1/namespaces/deployment-3352/pods/test-rolling-update-deployment-67cf4f6444-h7bf4 e142c925-387d-41b4-8a9c-6e6cc8f407b1 10442858 0 2019-12-28 22:45:27 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 276c59d4-4248-4b49-90a2-110b7d586371 0xc004297567 0xc004297568}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gb5nr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gb5nr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gb5nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:45:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:45:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:45:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-28 22:45:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.2,StartTime:2019-12-28 22:45:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-28 22:45:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://003520cbc86433698192d1bb93c02356aa2b4ad433a317f473b8f1e308369f9f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:45:35.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3352" for this suite.

• [SLOW TEST:15.449 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":241,"skipped":3711,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:45:36.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 28 22:45:37.365: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 28 22:45:39.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:45:41.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:45:43.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:45:45.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169937, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 28 22:45:48.553: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:45:49.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3318" for this suite.
STEP: Destroying namespace "webhook-3318-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.215 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":242,"skipped":3760,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:45:49.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-3f8b5dd8-928c-4f93-9fef-eb2e6c7989ca
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-3f8b5dd8-928c-4f93-9fef-eb2e6c7989ca
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:46:03.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5078" for this suite.

• [SLOW TEST:14.378 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3761,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:46:03.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 28 22:46:03.796: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fe047ba-bb8d-4195-ace7-dd1ec69d9499" in namespace "projected-7587" to be "success or failure"
Dec 28 22:46:03.934: INFO: Pod "downwardapi-volume-1fe047ba-bb8d-4195-ace7-dd1ec69d9499": Phase="Pending", Reason="", readiness=false. Elapsed: 137.79644ms
Dec 28 22:46:05.953: INFO: Pod "downwardapi-volume-1fe047ba-bb8d-4195-ace7-dd1ec69d9499": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156313537s
Dec 28 22:46:07.961: INFO: Pod "downwardapi-volume-1fe047ba-bb8d-4195-ace7-dd1ec69d9499": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16437127s
Dec 28 22:46:09.970: INFO: Pod "downwardapi-volume-1fe047ba-bb8d-4195-ace7-dd1ec69d9499": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173734824s
Dec 28 22:46:11.978: INFO: Pod "downwardapi-volume-1fe047ba-bb8d-4195-ace7-dd1ec69d9499": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181520039s
Dec 28 22:46:13.996: INFO: Pod "downwardapi-volume-1fe047ba-bb8d-4195-ace7-dd1ec69d9499": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.200137528s
STEP: Saw pod success
Dec 28 22:46:13.997: INFO: Pod "downwardapi-volume-1fe047ba-bb8d-4195-ace7-dd1ec69d9499" satisfied condition "success or failure"
Dec 28 22:46:14.008: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1fe047ba-bb8d-4195-ace7-dd1ec69d9499 container client-container: <nil>
STEP: delete the pod
Dec 28 22:46:14.072: INFO: Waiting for pod downwardapi-volume-1fe047ba-bb8d-4195-ace7-dd1ec69d9499 to disappear
Dec 28 22:46:14.189: INFO: Pod downwardapi-volume-1fe047ba-bb8d-4195-ace7-dd1ec69d9499 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:46:14.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7587" for this suite.

• [SLOW TEST:10.621 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3767,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:46:14.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 28 22:46:22.600: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:46:22.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2248" for this suite.

• [SLOW TEST:8.487 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3773,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:46:22.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Dec 28 22:46:22.827: INFO: Waiting up to 5m0s for pod "downward-api-03ddd7a7-9717-4ffe-9147-3230e02250c7" in namespace "downward-api-1961" to be "success or failure"
Dec 28 22:46:22.858: INFO: Pod "downward-api-03ddd7a7-9717-4ffe-9147-3230e02250c7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.491971ms
Dec 28 22:46:24.865: INFO: Pod "downward-api-03ddd7a7-9717-4ffe-9147-3230e02250c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037447455s
Dec 28 22:46:26.872: INFO: Pod "downward-api-03ddd7a7-9717-4ffe-9147-3230e02250c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044155353s
Dec 28 22:46:28.884: INFO: Pod "downward-api-03ddd7a7-9717-4ffe-9147-3230e02250c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056079317s
Dec 28 22:46:30.900: INFO: Pod "downward-api-03ddd7a7-9717-4ffe-9147-3230e02250c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072299204s
Dec 28 22:46:32.908: INFO: Pod "downward-api-03ddd7a7-9717-4ffe-9147-3230e02250c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080532144s
STEP: Saw pod success
Dec 28 22:46:32.908: INFO: Pod "downward-api-03ddd7a7-9717-4ffe-9147-3230e02250c7" satisfied condition "success or failure"
Dec 28 22:46:32.914: INFO: Trying to get logs from node jerma-node pod downward-api-03ddd7a7-9717-4ffe-9147-3230e02250c7 container dapi-container: <nil>
STEP: delete the pod
Dec 28 22:46:33.039: INFO: Waiting for pod downward-api-03ddd7a7-9717-4ffe-9147-3230e02250c7 to disappear
Dec 28 22:46:33.046: INFO: Pod downward-api-03ddd7a7-9717-4ffe-9147-3230e02250c7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:46:33.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1961" for this suite.

• [SLOW TEST:10.350 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3774,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:46:33.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Dec 28 22:46:34.029: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Dec 28 22:46:36.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:46:38.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:46:40.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713169994, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 28 22:46:43.088: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:46:43.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:46:44.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3638" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.497 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":247,"skipped":3812,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:46:44.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:46:44.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4020'
Dec 28 22:46:45.157: INFO: stderr: ""
Dec 28 22:46:45.157: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Dec 28 22:46:45.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4020'
Dec 28 22:46:45.776: INFO: stderr: ""
Dec 28 22:46:45.777: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Dec 28 22:46:46.782: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 28 22:46:46.783: INFO: Found 0 / 1
Dec 28 22:46:47.798: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 28 22:46:47.799: INFO: Found 0 / 1
Dec 28 22:46:48.786: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 28 22:46:48.786: INFO: Found 0 / 1
Dec 28 22:46:49.786: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 28 22:46:49.786: INFO: Found 0 / 1
Dec 28 22:46:50.791: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 28 22:46:50.791: INFO: Found 0 / 1
Dec 28 22:46:51.792: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 28 22:46:51.793: INFO: Found 0 / 1
Dec 28 22:46:52.791: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 28 22:46:52.791: INFO: Found 0 / 1
Dec 28 22:46:53.790: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 28 22:46:53.790: INFO: Found 1 / 1
Dec 28 22:46:53.790: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 28 22:46:53.795: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 28 22:46:53.795: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 28 22:46:53.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-m2c97 --namespace=kubectl-4020'
Dec 28 22:46:53.989: INFO: stderr: ""
Dec 28 22:46:53.989: INFO: stdout: "Name:         agnhost-master-m2c97\nNamespace:    kubectl-4020\nPriority:     0\nNode:         jerma-node/10.96.2.170\nStart Time:   Sat, 28 Dec 2019 22:46:46 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://213a85e14e53013129b3e5c7c1f6242c596b77a5b54ec62c03ffe23122b5ae7d\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 28 Dec 2019 22:46:52 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gklmq (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-gklmq:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-gklmq\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled  <unknown>  default-scheduler    Successfully assigned kubectl-4020/agnhost-master-m2c97 to jerma-node\n  Normal  Pulled     3s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    1s         kubelet, jerma-node  Started container agnhost-master\n"
Dec 28 22:46:53.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-4020'
Dec 28 22:46:54.235: INFO: stderr: ""
Dec 28 22:46:54.235: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-4020\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: agnhost-master-m2c97\n"
Dec 28 22:46:54.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-4020'
Dec 28 22:46:54.350: INFO: stderr: ""
Dec 28 22:46:54.350: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-4020\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.111.110.137\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Dec 28 22:46:54.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Dec 28 22:46:54.476: INFO: stderr: ""
Dec 28 22:46:54.476: INFO: stdout: "Name:               jerma-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 12 Oct 2019 13:47:49 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 28 Dec 2019 22:46:49 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Tue, 17 Dec 2019 21:23:22 +0000   Tue, 17 Dec 2019 21:23:22 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 28 Dec 2019 22:46:50 +0000   Sat, 12 Oct 2019 13:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 28 Dec 2019 22:46:50 +0000   Sat, 12 Oct 2019 13:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 28 Dec 2019 22:46:50 +0000   Sat, 12 Oct 2019 13:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 28 Dec 2019 22:46:50 +0000   Sat, 12 Oct 2019 13:48:29 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.170\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 4eaf1504b38c4046a625a134490a5292\n  System UUID:                4EAF1504-B38C-4046-A625-A134490A5292\n  Boot ID:                    be260572-5100-4207-9fbc-2294735ff8aa\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.16.1\n  Kube-Proxy Version:         v1.16.1\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-jcjl4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77d\n  kube-system                 weave-net-srfjj         20m (0%)      0 (0%)      0 (0%)           0 (0%)         11d\n  kubectl-4020                agnhost-master-m2c97    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Dec 28 22:46:54.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4020'
Dec 28 22:46:54.623: INFO: stderr: ""
Dec 28 22:46:54.624: INFO: stdout: "Name:         kubectl-4020\nLabels:       e2e-framework=kubectl\n              e2e-run=c225796b-88cb-41bc-9256-0a10e9f1f399\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:46:54.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4020" for this suite.

• [SLOW TEST:10.092 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":248,"skipped":3844,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:46:54.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 28 22:46:54.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9a111c4-b05e-4185-832e-bb003f9eefdf" in namespace "downward-api-5731" to be "success or failure"
Dec 28 22:46:54.850: INFO: Pod "downwardapi-volume-d9a111c4-b05e-4185-832e-bb003f9eefdf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.900697ms
Dec 28 22:46:56.860: INFO: Pod "downwardapi-volume-d9a111c4-b05e-4185-832e-bb003f9eefdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01677486s
Dec 28 22:46:58.874: INFO: Pod "downwardapi-volume-d9a111c4-b05e-4185-832e-bb003f9eefdf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030768859s
Dec 28 22:47:00.923: INFO: Pod "downwardapi-volume-d9a111c4-b05e-4185-832e-bb003f9eefdf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079673938s
Dec 28 22:47:02.931: INFO: Pod "downwardapi-volume-d9a111c4-b05e-4185-832e-bb003f9eefdf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087122759s
Dec 28 22:47:05.063: INFO: Pod "downwardapi-volume-d9a111c4-b05e-4185-832e-bb003f9eefdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.219262405s
STEP: Saw pod success
Dec 28 22:47:05.063: INFO: Pod "downwardapi-volume-d9a111c4-b05e-4185-832e-bb003f9eefdf" satisfied condition "success or failure"
Dec 28 22:47:05.068: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d9a111c4-b05e-4185-832e-bb003f9eefdf container client-container: <nil>
STEP: delete the pod
Dec 28 22:47:05.725: INFO: Waiting for pod downwardapi-volume-d9a111c4-b05e-4185-832e-bb003f9eefdf to disappear
Dec 28 22:47:05.736: INFO: Pod downwardapi-volume-d9a111c4-b05e-4185-832e-bb003f9eefdf no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:47:05.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5731" for this suite.

• [SLOW TEST:11.093 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":3848,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:47:05.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-110
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-110
Dec 28 22:47:05.885: INFO: Found 0 stateful pods, waiting for 1
Dec 28 22:47:15.897: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Dec 28 22:47:15.941: INFO: Deleting all statefulset in ns statefulset-110
Dec 28 22:47:16.019: INFO: Scaling statefulset ss to 0
Dec 28 22:47:36.252: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 22:47:36.263: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:47:36.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-110" for this suite.

• [SLOW TEST:30.624 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":250,"skipped":3858,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:47:36.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-zb5s8 in namespace proxy-7370
I1228 22:47:36.687573       8 runners.go:189] Created replication controller with name: proxy-service-zb5s8, namespace: proxy-7370, replica count: 1
I1228 22:47:37.738743       8 runners.go:189] proxy-service-zb5s8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:47:38.739271       8 runners.go:189] proxy-service-zb5s8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:47:39.739764       8 runners.go:189] proxy-service-zb5s8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:47:40.740265       8 runners.go:189] proxy-service-zb5s8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:47:41.740888       8 runners.go:189] proxy-service-zb5s8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:47:42.741951       8 runners.go:189] proxy-service-zb5s8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:47:43.743127       8 runners.go:189] proxy-service-zb5s8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 22:47:44.743663       8 runners.go:189] proxy-service-zb5s8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 22:47:45.744157       8 runners.go:189] proxy-service-zb5s8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 28 22:47:45.751: INFO: setup took 9.21738513s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
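
Each attempt line below hits the pods or services "proxy" subresource; the scheme:name:port triple in the URL picks the scheme (http/https) and the target port. One such request via client-go's REST client (URL pieces taken from the log; error handling and the context-taking Do are illustrative of a recent client-go):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // GET /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/
        // "http" picks the scheme and "160" the target port; the log expects "foo".
        body, err := cs.CoreV1().RESTClient().Get().
            Namespace("proxy-7370").
            Resource("pods").
            Name("http:proxy-service-zb5s8-2sd8v:160").
            SubResource("proxy").
            Do(context.TODO()).
            Raw()
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body))
    }
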
Dec 28 22:47:45.778: INFO: (0) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 26.112954ms)
Dec 28 22:47:45.786: INFO: (0) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 34.534503ms)
Dec 28 22:47:45.786: INFO: (0) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 35.183287ms)
Dec 28 22:47:45.787: INFO: (0) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 35.909734ms)
Dec 28 22:47:45.787: INFO: (0) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 35.926334ms)
Dec 28 22:47:45.790: INFO: (0) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 38.839844ms)
Dec 28 22:47:45.790: INFO: (0) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 38.254142ms)
Dec 28 22:47:45.796: INFO: (0) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 44.868309ms)
Dec 28 22:47:45.796: INFO: (0) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 44.268513ms)
Dec 28 22:47:45.796: INFO: (0) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:1080/proxy/: test<... (200; 44.423595ms)
Dec 28 22:47:45.797: INFO: (0) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 44.552842ms)
Dec 28 22:47:45.797: INFO: (0) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 45.690869ms)
Dec 28 22:47:45.801: INFO: (0) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test<... (200; 28.942304ms)
Dec 28 22:47:45.842: INFO: (1) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 35.268665ms)
Dec 28 22:47:45.843: INFO: (1) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 36.507796ms)
Dec 28 22:47:45.843: INFO: (1) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 36.88133ms)
Dec 28 22:47:45.843: INFO: (1) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 37.120765ms)
Dec 28 22:47:45.844: INFO: (1) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 37.39876ms)
Dec 28 22:47:45.845: INFO: (1) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 38.75961ms)
Dec 28 22:47:45.845: INFO: (1) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 38.812291ms)
Dec 28 22:47:45.846: INFO: (1) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:462/proxy/: tls qux (200; 39.682235ms)
Dec 28 22:47:45.846: INFO: (1) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 40.210018ms)
Dec 28 22:47:45.846: INFO: (1) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 39.89697ms)
Dec 28 22:47:45.846: INFO: (1) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 40.354912ms)
Dec 28 22:47:45.846: INFO: (1) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test (200; 20.567709ms)
Dec 28 22:47:45.878: INFO: (2) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:1080/proxy/: test<... (200; 22.056454ms)
Dec 28 22:47:45.879: INFO: (2) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 22.412199ms)
Dec 28 22:47:45.879: INFO: (2) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:462/proxy/: tls qux (200; 22.76491ms)
Dec 28 22:47:45.879: INFO: (2) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 22.469085ms)
Dec 28 22:47:45.882: INFO: (2) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 24.863345ms)
Dec 28 22:47:45.882: INFO: (2) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 25.572077ms)
Dec 28 22:47:45.882: INFO: (2) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 24.956737ms)
Dec 28 22:47:45.882: INFO: (2) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 25.79791ms)
Dec 28 22:47:45.883: INFO: (2) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 26.196674ms)
Dec 28 22:47:45.884: INFO: (2) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 27.993644ms)
Dec 28 22:47:45.885: INFO: (2) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 27.538385ms)
Dec 28 22:47:45.894: INFO: (3) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 9.583832ms)
Dec 28 22:47:45.895: INFO: (3) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test<... (200; 19.786772ms)
Dec 28 22:47:45.905: INFO: (3) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 19.998674ms)
Dec 28 22:47:45.905: INFO: (3) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 20.16436ms)
Dec 28 22:47:45.905: INFO: (3) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 20.391148ms)
Dec 28 22:47:45.906: INFO: (3) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 21.113882ms)
Dec 28 22:47:45.908: INFO: (3) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 23.406103ms)
Dec 28 22:47:45.909: INFO: (3) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:462/proxy/: tls qux (200; 24.295182ms)
Dec 28 22:47:45.919: INFO: (4) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 9.912712ms)
Dec 28 22:47:45.919: INFO: (4) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 8.737194ms)
Dec 28 22:47:45.919: INFO: (4) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:1080/proxy/: test<... (200; 8.900528ms)
Dec 28 22:47:45.919: INFO: (4) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 9.164831ms)
Dec 28 22:47:45.920: INFO: (4) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 9.675688ms)
Dec 28 22:47:45.920: INFO: (4) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test<... (200; 11.606124ms)
Dec 28 22:47:45.937: INFO: (5) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 11.620919ms)
Dec 28 22:47:45.938: INFO: (5) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 12.283068ms)
Dec 28 22:47:45.938: INFO: (5) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 12.497591ms)
Dec 28 22:47:45.938: INFO: (5) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 13.078433ms)
Dec 28 22:47:45.938: INFO: (5) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:462/proxy/: tls qux (200; 12.838124ms)
Dec 28 22:47:45.938: INFO: (5) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: ... (200; 13.129079ms)
Dec 28 22:47:45.941: INFO: (5) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 15.863033ms)
Dec 28 22:47:45.941: INFO: (5) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 15.816888ms)
Dec 28 22:47:45.941: INFO: (5) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 16.132658ms)
Dec 28 22:47:45.941: INFO: (5) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 15.984673ms)
Dec 28 22:47:45.942: INFO: (5) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 16.450982ms)
Dec 28 22:47:45.942: INFO: (5) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 16.395097ms)
Dec 28 22:47:45.954: INFO: (6) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 11.828348ms)
Dec 28 22:47:45.954: INFO: (6) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 11.989873ms)
Dec 28 22:47:45.954: INFO: (6) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:462/proxy/: tls qux (200; 11.947286ms)
Dec 28 22:47:45.955: INFO: (6) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 13.594214ms)
Dec 28 22:47:45.956: INFO: (6) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 13.64415ms)
Dec 28 22:47:45.956: INFO: (6) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 13.767736ms)
Dec 28 22:47:45.956: INFO: (6) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 14.201701ms)
Dec 28 22:47:45.956: INFO: (6) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 14.329591ms)
Dec 28 22:47:45.958: INFO: (6) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:1080/proxy/: test<... (200; 15.900028ms)
Dec 28 22:47:45.958: INFO: (6) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 16.134914ms)
Dec 28 22:47:45.958: INFO: (6) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: ... (200; 15.750678ms)
Dec 28 22:47:45.977: INFO: (7) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 15.865141ms)
Dec 28 22:47:45.977: INFO: (7) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 15.920024ms)
Dec 28 22:47:45.979: INFO: (7) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 18.587644ms)
Dec 28 22:47:45.980: INFO: (7) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 19.46854ms)
Dec 28 22:47:45.980: INFO: (7) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 19.570719ms)
Dec 28 22:47:45.980: INFO: (7) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 19.444365ms)
Dec 28 22:47:45.980: INFO: (7) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 19.53211ms)
Dec 28 22:47:45.980: INFO: (7) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 19.493613ms)
Dec 28 22:47:45.980: INFO: (7) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:1080/proxy/: test<... (200; 19.536192ms)
Dec 28 22:47:45.980: INFO: (7) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 19.559679ms)
Dec 28 22:47:45.980: INFO: (7) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 19.574401ms)
Dec 28 22:47:45.980: INFO: (7) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 19.71788ms)
Dec 28 22:47:45.980: INFO: (7) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test<... (200; 4.278756ms)
Dec 28 22:47:45.988: INFO: (8) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 7.772543ms)
Dec 28 22:47:45.989: INFO: (8) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 7.734641ms)
Dec 28 22:47:45.989: INFO: (8) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 8.05869ms)
Dec 28 22:47:45.989: INFO: (8) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 8.413195ms)
Dec 28 22:47:45.989: INFO: (8) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 8.40931ms)
Dec 28 22:47:45.989: INFO: (8) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 8.536444ms)
Dec 28 22:47:45.989: INFO: (8) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 8.312784ms)
Dec 28 22:47:45.989: INFO: (8) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test<... (200; 12.477587ms)
Dec 28 22:47:46.012: INFO: (9) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 13.533165ms)
Dec 28 22:47:46.014: INFO: (9) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 15.002704ms)
Dec 28 22:47:46.014: INFO: (9) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 15.496318ms)
Dec 28 22:47:46.016: INFO: (9) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 17.639966ms)
Dec 28 22:47:46.016: INFO: (9) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 17.630712ms)
Dec 28 22:47:46.016: INFO: (9) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 17.683298ms)
Dec 28 22:47:46.016: INFO: (9) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 17.703993ms)
Dec 28 22:47:46.016: INFO: (9) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test (200; 46.647302ms)
Dec 28 22:47:46.065: INFO: (10) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 47.145652ms)
Dec 28 22:47:46.065: INFO: (10) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:1080/proxy/: test<... (200; 47.420201ms)
Dec 28 22:47:46.066: INFO: (10) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 48.214711ms)
Dec 28 22:47:46.066: INFO: (10) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 48.663819ms)
Dec 28 22:47:46.067: INFO: (10) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 49.106707ms)
Dec 28 22:47:46.067: INFO: (10) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 49.154833ms)
Dec 28 22:47:46.067: INFO: (10) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 49.135428ms)
Dec 28 22:47:46.070: INFO: (10) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 52.112451ms)
Dec 28 22:47:46.091: INFO: (11) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 20.034388ms)
Dec 28 22:47:46.091: INFO: (11) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 20.420542ms)
Dec 28 22:47:46.092: INFO: (11) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test<... (200; 21.892428ms)
Dec 28 22:47:46.094: INFO: (11) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 22.819791ms)
Dec 28 22:47:46.094: INFO: (11) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 22.830856ms)
Dec 28 22:47:46.094: INFO: (11) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 22.775407ms)
Dec 28 22:47:46.096: INFO: (11) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 24.784971ms)
Dec 28 22:47:46.096: INFO: (11) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 25.028874ms)
Dec 28 22:47:46.096: INFO: (11) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 25.812084ms)
Dec 28 22:47:46.096: INFO: (11) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 25.244488ms)
Dec 28 22:47:46.096: INFO: (11) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:462/proxy/: tls qux (200; 26.03182ms)
Dec 28 22:47:46.096: INFO: (11) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 25.678235ms)
Dec 28 22:47:46.096: INFO: (11) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 25.571273ms)
Dec 28 22:47:46.097: INFO: (11) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 25.987336ms)
Dec 28 22:47:46.107: INFO: (12) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 10.478136ms)
Dec 28 22:47:46.107: INFO: (12) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 10.328558ms)
Dec 28 22:47:46.108: INFO: (12) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 10.429341ms)
Dec 28 22:47:46.108: INFO: (12) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test<... (200; 10.927868ms)
Dec 28 22:47:46.108: INFO: (12) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 10.883629ms)
Dec 28 22:47:46.108: INFO: (12) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 10.951299ms)
Dec 28 22:47:46.109: INFO: (12) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:462/proxy/: tls qux (200; 11.655757ms)
Dec 28 22:47:46.109: INFO: (12) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 11.73752ms)
Dec 28 22:47:46.110: INFO: (12) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 13.206779ms)
Dec 28 22:47:46.111: INFO: (12) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 13.272241ms)
Dec 28 22:47:46.111: INFO: (12) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 13.760773ms)
Dec 28 22:47:46.111: INFO: (12) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 13.916407ms)
Dec 28 22:47:46.112: INFO: (12) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 14.511112ms)
Dec 28 22:47:46.112: INFO: (12) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 14.876573ms)
Dec 28 22:47:46.117: INFO: (13) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 5.063373ms)
Dec 28 22:47:46.117: INFO: (13) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 5.245954ms)
Dec 28 22:47:46.118: INFO: (13) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 5.219782ms)
Dec 28 22:47:46.118: INFO: (13) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:1080/proxy/: test<... (200; 5.300566ms)
Dec 28 22:47:46.119: INFO: (13) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 6.192295ms)
Dec 28 22:47:46.121: INFO: (13) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 8.379592ms)
Dec 28 22:47:46.121: INFO: (13) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 8.735367ms)
Dec 28 22:47:46.122: INFO: (13) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 10.175399ms)
Dec 28 22:47:46.123: INFO: (13) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 10.651381ms)
Dec 28 22:47:46.123: INFO: (13) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 10.791586ms)
Dec 28 22:47:46.123: INFO: (13) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 10.910832ms)
Dec 28 22:47:46.123: INFO: (13) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test<... (200; 5.104559ms)
Dec 28 22:47:46.130: INFO: (14) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 5.257051ms)
Dec 28 22:47:46.131: INFO: (14) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 6.215543ms)
Dec 28 22:47:46.133: INFO: (14) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 7.470198ms)
Dec 28 22:47:46.133: INFO: (14) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 7.894077ms)
Dec 28 22:47:46.133: INFO: (14) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 8.121099ms)
Dec 28 22:47:46.133: INFO: (14) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 8.106071ms)
Dec 28 22:47:46.133: INFO: (14) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 8.105944ms)
Dec 28 22:47:46.133: INFO: (14) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 8.034751ms)
Dec 28 22:47:46.133: INFO: (14) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 8.206779ms)
Dec 28 22:47:46.135: INFO: (14) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 9.781766ms)
Dec 28 22:47:46.137: INFO: (14) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 11.846069ms)
Dec 28 22:47:46.137: INFO: (14) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 12.122372ms)
Dec 28 22:47:46.137: INFO: (14) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 12.437317ms)
Dec 28 22:47:46.145: INFO: (15) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 7.581443ms)
Dec 28 22:47:46.145: INFO: (15) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 7.486062ms)
Dec 28 22:47:46.145: INFO: (15) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 7.539569ms)
Dec 28 22:47:46.145: INFO: (15) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 7.494692ms)
Dec 28 22:47:46.145: INFO: (15) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 7.612114ms)
Dec 28 22:47:46.146: INFO: (15) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 8.02754ms)
Dec 28 22:47:46.146: INFO: (15) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: ... (200; 9.792234ms)
Dec 28 22:47:46.148: INFO: (15) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 10.021718ms)
Dec 28 22:47:46.148: INFO: (15) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:462/proxy/: tls qux (200; 10.102385ms)
Dec 28 22:47:46.148: INFO: (15) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 10.066769ms)
Dec 28 22:47:46.148: INFO: (15) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:1080/proxy/: test<... (200; 10.078479ms)
Dec 28 22:47:46.148: INFO: (15) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 10.244119ms)
Dec 28 22:47:46.157: INFO: (16) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 9.504851ms)
Dec 28 22:47:46.157: INFO: (16) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 9.433835ms)
Dec 28 22:47:46.158: INFO: (16) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:1080/proxy/: test<... (200; 10.264578ms)
Dec 28 22:47:46.159: INFO: (16) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 11.071978ms)
Dec 28 22:47:46.159: INFO: (16) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 11.152078ms)
Dec 28 22:47:46.159: INFO: (16) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:462/proxy/: tls qux (200; 11.06179ms)
Dec 28 22:47:46.159: INFO: (16) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test (200; 11.026235ms)
Dec 28 22:47:46.159: INFO: (16) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 11.039989ms)
Dec 28 22:47:46.159: INFO: (16) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 11.095055ms)
Dec 28 22:47:46.160: INFO: (16) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 11.925981ms)
Dec 28 22:47:46.160: INFO: (16) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 11.853171ms)
Dec 28 22:47:46.160: INFO: (16) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 11.989276ms)
Dec 28 22:47:46.160: INFO: (16) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 11.943654ms)
Dec 28 22:47:46.160: INFO: (16) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 12.014969ms)
Dec 28 22:47:46.160: INFO: (16) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 12.415025ms)
Dec 28 22:47:46.168: INFO: (17) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 7.085476ms)
Dec 28 22:47:46.168: INFO: (17) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 7.371337ms)
Dec 28 22:47:46.168: INFO: (17) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:462/proxy/: tls qux (200; 7.267824ms)
Dec 28 22:47:46.168: INFO: (17) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 7.449926ms)
Dec 28 22:47:46.168: INFO: (17) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 7.407876ms)
Dec 28 22:47:46.168: INFO: (17) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 7.806682ms)
Dec 28 22:47:46.169: INFO: (17) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 8.659285ms)
Dec 28 22:47:46.169: INFO: (17) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: test<... (200; 9.514167ms)
Dec 28 22:47:46.170: INFO: (17) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 9.854954ms)
Dec 28 22:47:46.170: INFO: (17) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 9.873939ms)
Dec 28 22:47:46.171: INFO: (17) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 10.24815ms)
Dec 28 22:47:46.171: INFO: (17) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 10.699435ms)
Dec 28 22:47:46.171: INFO: (17) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 10.949302ms)
Dec 28 22:47:46.173: INFO: (17) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 12.255848ms)
Dec 28 22:47:46.181: INFO: (18) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:462/proxy/: tls qux (200; 8.19948ms)
Dec 28 22:47:46.181: INFO: (18) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: ... (200; 8.899868ms)
Dec 28 22:47:46.182: INFO: (18) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 8.968172ms)
Dec 28 22:47:46.182: INFO: (18) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:1080/proxy/: test<... (200; 9.581491ms)
Dec 28 22:47:46.182: INFO: (18) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 9.72596ms)
Dec 28 22:47:46.185: INFO: (18) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 11.709701ms)
Dec 28 22:47:46.185: INFO: (18) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname2/proxy/: bar (200; 11.90049ms)
Dec 28 22:47:46.185: INFO: (18) /api/v1/namespaces/proxy-7370/services/http:proxy-service-zb5s8:portname1/proxy/: foo (200; 12.069808ms)
Dec 28 22:47:46.185: INFO: (18) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname2/proxy/: tls qux (200; 12.294352ms)
Dec 28 22:47:46.185: INFO: (18) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:460/proxy/: tls baz (200; 12.324208ms)
Dec 28 22:47:46.185: INFO: (18) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname2/proxy/: bar (200; 12.481244ms)
Dec 28 22:47:46.186: INFO: (18) /api/v1/namespaces/proxy-7370/services/proxy-service-zb5s8:portname1/proxy/: foo (200; 12.73705ms)
Dec 28 22:47:46.189: INFO: (19) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:1080/proxy/: ... (200; 3.044853ms)
Dec 28 22:47:46.190: INFO: (19) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v/proxy/: test (200; 4.241224ms)
Dec 28 22:47:46.193: INFO: (19) /api/v1/namespaces/proxy-7370/pods/http:proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 7.261111ms)
Dec 28 22:47:46.194: INFO: (19) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:1080/proxy/: test<... (200; 7.940694ms)
Dec 28 22:47:46.194: INFO: (19) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:162/proxy/: bar (200; 8.126803ms)
Dec 28 22:47:46.194: INFO: (19) /api/v1/namespaces/proxy-7370/pods/proxy-service-zb5s8-2sd8v:160/proxy/: foo (200; 8.104947ms)
Dec 28 22:47:46.194: INFO: (19) /api/v1/namespaces/proxy-7370/services/https:proxy-service-zb5s8:tlsportname1/proxy/: tls baz (200; 8.29861ms)
Dec 28 22:47:46.194: INFO: (19) /api/v1/namespaces/proxy-7370/pods/https:proxy-service-zb5s8-2sd8v:443/proxy/: ... (200; ...)
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 28 22:47:51.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9759'
Dec 28 22:47:51.436: INFO: stderr: ""
Dec 28 22:47:51.436: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Dec 28 22:48:01.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9759 -o json'
Dec 28 22:48:01.707: INFO: stderr: ""
Dec 28 22:48:01.708: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-28T22:47:51Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-9759\",\n        \"resourceVersion\": \"10443540\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-9759/pods/e2e-test-httpd-pod\",\n        \"uid\": \"f4b22d75-d166-45f2-9489-fb904a17ff1e\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-5r5dr\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-5r5dr\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-5r5dr\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-28T22:47:51Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-28T22:47:57Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-28T22:47:57Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-28T22:47:51Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://45b02e0e366eca408918c305ce79675debc24281aa4cd51e288ecbc2c9e01469\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-28T22:47:56Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.170\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-28T22:47:51Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 28 22:48:01.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9759'
Dec 28 22:48:02.167: INFO: stderr: ""
Dec 28 22:48:02.167: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Dec 28 22:48:02.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9759'
Dec 28 22:48:08.023: INFO: stderr: ""
Dec 28 22:48:08.023: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:48:08.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9759" for this suite.

• [SLOW TEST:16.854 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":252,"skipped":3925,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:48:08.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Dec 28 22:48:08.235: INFO: Created pod &Pod{ObjectMeta:{dns-2120  dns-2120 /api/v1/namespaces/dns-2120/pods/dns-2120 bf834306-b8bf-4946-b596-ecbf9c9887bc 10443569 0 2019-12-28 22:48:08 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v29j6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v29j6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v29j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Dec 28 22:48:16.303: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2120 PodName:dns-2120 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:48:16.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized DNS server is configured on pod...
Dec 28 22:48:16.622: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2120 PodName:dns-2120 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:48:16.622: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:48:16.860: INFO: Deleting pod dns-2120...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:48:16.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2120" for this suite.

• [SLOW TEST:8.882 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":253,"skipped":3947,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:48:16.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-3335dd20-7f8b-4d4b-82b1-6f8cca5ed535
STEP: Creating a pod to test consume secrets
Dec 28 22:48:17.140: INFO: Waiting up to 5m0s for pod "pod-secrets-4e128bfd-f4ee-42f1-a83c-99325fcb8338" in namespace "secrets-7560" to be "success or failure"
Dec 28 22:48:17.159: INFO: Pod "pod-secrets-4e128bfd-f4ee-42f1-a83c-99325fcb8338": Phase="Pending", Reason="", readiness=false. Elapsed: 19.014266ms
Dec 28 22:48:19.167: INFO: Pod "pod-secrets-4e128bfd-f4ee-42f1-a83c-99325fcb8338": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026341987s
Dec 28 22:48:21.172: INFO: Pod "pod-secrets-4e128bfd-f4ee-42f1-a83c-99325fcb8338": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031646017s
Dec 28 22:48:23.179: INFO: Pod "pod-secrets-4e128bfd-f4ee-42f1-a83c-99325fcb8338": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038419767s
Dec 28 22:48:25.225: INFO: Pod "pod-secrets-4e128bfd-f4ee-42f1-a83c-99325fcb8338": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084627993s
Dec 28 22:48:27.234: INFO: Pod "pod-secrets-4e128bfd-f4ee-42f1-a83c-99325fcb8338": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093281607s
STEP: Saw pod success
Dec 28 22:48:27.234: INFO: Pod "pod-secrets-4e128bfd-f4ee-42f1-a83c-99325fcb8338" satisfied condition "success or failure"
Dec 28 22:48:27.238: INFO: Trying to get logs from node jerma-node pod pod-secrets-4e128bfd-f4ee-42f1-a83c-99325fcb8338 container secret-env-test: 
STEP: delete the pod
Dec 28 22:48:27.345: INFO: Waiting for pod pod-secrets-4e128bfd-f4ee-42f1-a83c-99325fcb8338 to disappear
Dec 28 22:48:27.376: INFO: Pod pod-secrets-4e128bfd-f4ee-42f1-a83c-99325fcb8338 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:48:27.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7560" for this suite.

• [SLOW TEST:10.467 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":3980,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:48:27.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6638.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6638.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6638.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6638.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6638.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6638.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 22:48:39.657: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.669: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.676: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6638.svc.cluster.local from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.685: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6638.svc.cluster.local from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.689: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.695: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.701: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.706: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.711: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6638.svc.cluster.local from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.716: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6638.svc.cluster.local from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.720: INFO: Unable to read jessie_udp@PodARecord from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.726: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e: the server could not find the requested resource (get pods dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e)
Dec 28 22:48:39.726: INFO: Lookups using dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6638.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6638.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6638.svc.cluster.local jessie_udp@dns-test-service-2.dns-6638.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6638.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 28 22:48:44.787: INFO: DNS probes using dns-6638/dns-test-e8497a75-1a54-4bba-83ba-480ec565ea2e succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:48:44.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6638" for this suite.

• [SLOW TEST:17.613 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":255,"skipped":3980,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:48:45.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7925
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-7925
I1228 22:48:45.242944       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7925, replica count: 2
I1228 22:48:48.293980       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:48:51.294401       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:48:54.294795       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 28 22:48:54.294: INFO: Creating new exec pod
Dec 28 22:49:03.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7925 execpodkc8dd -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Dec 28 22:49:04.095: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Dec 28 22:49:04.095: INFO: stdout: ""
Dec 28 22:49:04.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7925 execpodkc8dd -- /bin/sh -x -c nc -zv -t -w 2 10.104.242.227 80'
Dec 28 22:49:04.590: INFO: stderr: "+ nc -zv -t -w 2 10.104.242.227 80\nConnection to 10.104.242.227 80 port [tcp/http] succeeded!\n"
Dec 28 22:49:04.591: INFO: stdout: ""
Dec 28 22:49:04.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7925 execpodkc8dd -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.170 32432'
Dec 28 22:49:04.958: INFO: stderr: "+ nc -zv -t -w 2 10.96.2.170 32432\nConnection to 10.96.2.170 32432 port [tcp/32432] succeeded!\n"
Dec 28 22:49:04.958: INFO: stdout: ""
Dec 28 22:49:04.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7925 execpodkc8dd -- /bin/sh -x -c nc -zv -t -w 2 10.96.3.35 32432'
Dec 28 22:49:05.328: INFO: stderr: "+ nc -zv -t -w 2 10.96.3.35 32432\nConnection to 10.96.3.35 32432 port [tcp/32432] succeeded!\n"
Dec 28 22:49:05.329: INFO: stdout: ""
Dec 28 22:49:05.329: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:49:05.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7925" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:20.451 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":256,"skipped":3991,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:49:05.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 28 22:49:05.558: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba" in namespace "projected-3536" to be "success or failure"
Dec 28 22:49:05.576: INFO: Pod "downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba": Phase="Pending", Reason="", readiness=false. Elapsed: 18.036507ms
Dec 28 22:49:07.584: INFO: Pod "downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026472162s
Dec 28 22:49:09.592: INFO: Pod "downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034195372s
Dec 28 22:49:11.609: INFO: Pod "downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050990957s
Dec 28 22:49:13.618: INFO: Pod "downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059778827s
Dec 28 22:49:15.624: INFO: Pod "downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.066471572s
Dec 28 22:49:17.631: INFO: Pod "downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.073410087s
STEP: Saw pod success
Dec 28 22:49:17.632: INFO: Pod "downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba" satisfied condition "success or failure"
Dec 28 22:49:17.636: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba container client-container: 
STEP: delete the pod
Dec 28 22:49:17.669: INFO: Waiting for pod downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba to disappear
Dec 28 22:49:17.672: INFO: Pod downwardapi-volume-a053180b-bfb8-4694-82a1-6442756c9aba no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:49:17.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3536" for this suite.

• [SLOW TEST:12.226 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4007,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:49:17.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Dec 28 22:49:17.890: INFO: Waiting up to 5m0s for pod "downward-api-6c41213c-82dd-4009-826f-c0b7a34d55e5" in namespace "downward-api-1191" to be "success or failure"
Dec 28 22:49:17.915: INFO: Pod "downward-api-6c41213c-82dd-4009-826f-c0b7a34d55e5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.567067ms
Dec 28 22:49:19.932: INFO: Pod "downward-api-6c41213c-82dd-4009-826f-c0b7a34d55e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042011937s
Dec 28 22:49:21.959: INFO: Pod "downward-api-6c41213c-82dd-4009-826f-c0b7a34d55e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068564924s
Dec 28 22:49:23.968: INFO: Pod "downward-api-6c41213c-82dd-4009-826f-c0b7a34d55e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077651696s
Dec 28 22:49:25.985: INFO: Pod "downward-api-6c41213c-82dd-4009-826f-c0b7a34d55e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094486238s
STEP: Saw pod success
Dec 28 22:49:25.985: INFO: Pod "downward-api-6c41213c-82dd-4009-826f-c0b7a34d55e5" satisfied condition "success or failure"
Dec 28 22:49:26.020: INFO: Trying to get logs from node jerma-node pod downward-api-6c41213c-82dd-4009-826f-c0b7a34d55e5 container dapi-container: 
STEP: delete the pod
Dec 28 22:49:26.131: INFO: Waiting for pod downward-api-6c41213c-82dd-4009-826f-c0b7a34d55e5 to disappear
Dec 28 22:49:26.139: INFO: Pod downward-api-6c41213c-82dd-4009-826f-c0b7a34d55e5 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:49:26.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1191" for this suite.

• [SLOW TEST:8.475 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4058,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:49:26.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-13f8d9ae-870f-4387-9cff-ff6cc7a8fb9d
STEP: Creating a pod to test consume configMaps
Dec 28 22:49:26.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-587821d1-0a95-4e1d-b50d-f885f376cf89" in namespace "configmap-9133" to be "success or failure"
Dec 28 22:49:26.407: INFO: Pod "pod-configmaps-587821d1-0a95-4e1d-b50d-f885f376cf89": Phase="Pending", Reason="", readiness=false. Elapsed: 80.6404ms
Dec 28 22:49:28.416: INFO: Pod "pod-configmaps-587821d1-0a95-4e1d-b50d-f885f376cf89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089397256s
Dec 28 22:49:30.424: INFO: Pod "pod-configmaps-587821d1-0a95-4e1d-b50d-f885f376cf89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097279897s
Dec 28 22:49:32.433: INFO: Pod "pod-configmaps-587821d1-0a95-4e1d-b50d-f885f376cf89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106369346s
Dec 28 22:49:34.459: INFO: Pod "pod-configmaps-587821d1-0a95-4e1d-b50d-f885f376cf89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.132865125s
STEP: Saw pod success
Dec 28 22:49:34.460: INFO: Pod "pod-configmaps-587821d1-0a95-4e1d-b50d-f885f376cf89" satisfied condition "success or failure"
Dec 28 22:49:34.467: INFO: Trying to get logs from node jerma-node pod pod-configmaps-587821d1-0a95-4e1d-b50d-f885f376cf89 container configmap-volume-test: 
STEP: delete the pod
Dec 28 22:49:34.628: INFO: Waiting for pod pod-configmaps-587821d1-0a95-4e1d-b50d-f885f376cf89 to disappear
Dec 28 22:49:34.640: INFO: Pod pod-configmaps-587821d1-0a95-4e1d-b50d-f885f376cf89 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:49:34.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9133" for this suite.

• [SLOW TEST:8.519 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4064,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:49:34.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 28 22:49:53.118: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2080 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:49:53.118: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:49:53.301: INFO: Exec stderr: ""
Dec 28 22:49:53.302: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2080 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:49:53.302: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:49:53.471: INFO: Exec stderr: ""
Dec 28 22:49:53.471: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2080 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:49:53.471: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:49:53.686: INFO: Exec stderr: ""
Dec 28 22:49:53.686: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2080 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:49:53.687: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:49:53.984: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 28 22:49:53.985: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2080 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:49:53.985: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:49:54.241: INFO: Exec stderr: ""
Dec 28 22:49:54.241: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2080 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:49:54.241: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:49:54.452: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 28 22:49:54.452: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2080 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:49:54.452: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:49:54.694: INFO: Exec stderr: ""
Dec 28 22:49:54.694: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2080 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:49:54.694: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:49:54.894: INFO: Exec stderr: ""
Dec 28 22:49:54.895: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2080 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:49:54.895: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:49:55.152: INFO: Exec stderr: ""
Dec 28 22:49:55.152: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2080 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 22:49:55.152: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 22:49:55.374: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:49:55.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2080" for this suite.

• [SLOW TEST:20.730 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4138,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:49:55.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1228 22:50:36.432520       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 22:50:36.432: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:50:36.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5771" for this suite.

• [SLOW TEST:41.035 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":261,"skipped":4151,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:50:36.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 28 22:50:36.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa80f1c0-347d-4748-a499-e2482c58f103" in namespace "downward-api-9576" to be "success or failure"
Dec 28 22:50:36.669: INFO: Pod "downwardapi-volume-aa80f1c0-347d-4748-a499-e2482c58f103": Phase="Pending", Reason="", readiness=false. Elapsed: 8.507643ms
Dec 28 22:50:38.720: INFO: Pod "downwardapi-volume-aa80f1c0-347d-4748-a499-e2482c58f103": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058977313s
Dec 28 22:50:40.727: INFO: Pod "downwardapi-volume-aa80f1c0-347d-4748-a499-e2482c58f103": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065785614s
Dec 28 22:50:42.754: INFO: Pod "downwardapi-volume-aa80f1c0-347d-4748-a499-e2482c58f103": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093347043s
Dec 28 22:50:45.226: INFO: Pod "downwardapi-volume-aa80f1c0-347d-4748-a499-e2482c58f103": Phase="Pending", Reason="", readiness=false. Elapsed: 8.565740995s
Dec 28 22:50:48.329: INFO: Pod "downwardapi-volume-aa80f1c0-347d-4748-a499-e2482c58f103": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.668591513s
STEP: Saw pod success
Dec 28 22:50:48.330: INFO: Pod "downwardapi-volume-aa80f1c0-347d-4748-a499-e2482c58f103" satisfied condition "success or failure"
Dec 28 22:50:48.719: INFO: Trying to get logs from node jerma-server-4b75xjbddvit pod downwardapi-volume-aa80f1c0-347d-4748-a499-e2482c58f103 container client-container: 
STEP: delete the pod
Dec 28 22:50:49.504: INFO: Waiting for pod downwardapi-volume-aa80f1c0-347d-4748-a499-e2482c58f103 to disappear
Dec 28 22:50:50.489: INFO: Pod downwardapi-volume-aa80f1c0-347d-4748-a499-e2482c58f103 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:50:50.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9576" for this suite.

• [SLOW TEST:14.711 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4152,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:50:51.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 28 22:50:53.043: INFO: Waiting up to 5m0s for pod "pod-12eb798b-3189-4c6a-98f4-00c1ebc57284" in namespace "emptydir-4742" to be "success or failure"
Dec 28 22:50:53.347: INFO: Pod "pod-12eb798b-3189-4c6a-98f4-00c1ebc57284": Phase="Pending", Reason="", readiness=false. Elapsed: 302.779984ms
Dec 28 22:50:55.356: INFO: Pod "pod-12eb798b-3189-4c6a-98f4-00c1ebc57284": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311943825s
Dec 28 22:50:57.373: INFO: Pod "pod-12eb798b-3189-4c6a-98f4-00c1ebc57284": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32902043s
Dec 28 22:50:59.385: INFO: Pod "pod-12eb798b-3189-4c6a-98f4-00c1ebc57284": Phase="Pending", Reason="", readiness=false. Elapsed: 6.340706678s
Dec 28 22:51:01.396: INFO: Pod "pod-12eb798b-3189-4c6a-98f4-00c1ebc57284": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.351920246s
STEP: Saw pod success
Dec 28 22:51:01.396: INFO: Pod "pod-12eb798b-3189-4c6a-98f4-00c1ebc57284" satisfied condition "success or failure"
Dec 28 22:51:01.400: INFO: Trying to get logs from node jerma-node pod pod-12eb798b-3189-4c6a-98f4-00c1ebc57284 container test-container: 
STEP: delete the pod
Dec 28 22:51:01.857: INFO: Waiting for pod pod-12eb798b-3189-4c6a-98f4-00c1ebc57284 to disappear
Dec 28 22:51:01.872: INFO: Pod pod-12eb798b-3189-4c6a-98f4-00c1ebc57284 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:51:01.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4742" for this suite.

• [SLOW TEST:10.741 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4153,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:51:01.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-8af60547-aa71-4de1-a504-91e2fcc1a2ae
STEP: Creating configMap with name cm-test-opt-upd-c74a9d35-8928-4a57-a706-d7ba00000453
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8af60547-aa71-4de1-a504-91e2fcc1a2ae
STEP: Updating configmap cm-test-opt-upd-c74a9d35-8928-4a57-a706-d7ba00000453
STEP: Creating configMap with name cm-test-opt-create-e1a06ab8-09dd-4ac0-835b-34a3323dd6ed
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:52:33.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6184" for this suite.

• [SLOW TEST:91.488 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4199,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:52:33.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-6020
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-6020
STEP: creating replication controller externalsvc in namespace services-6020
I1228 22:52:33.673406       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6020, replica count: 2
I1228 22:52:36.724959       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:52:39.725711       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:52:42.726164       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:52:45.726822       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Dec 28 22:52:45.838: INFO: Creating new exec pod
Dec 28 22:52:53.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6020 execpodqd5fr -- /bin/sh -x -c nslookup nodeport-service'
Dec 28 22:52:54.350: INFO: stderr: "+ nslookup nodeport-service\n"
Dec 28 22:52:54.350: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6020.svc.cluster.local\tcanonical name = externalsvc.services-6020.svc.cluster.local.\nName:\texternalsvc.services-6020.svc.cluster.local\nAddress: 10.110.196.52\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-6020, will wait for the garbage collector to delete the pods
Dec 28 22:52:54.414: INFO: Deleting ReplicationController externalsvc took: 6.930908ms
Dec 28 22:52:54.715: INFO: Terminating ReplicationController externalsvc pods took: 300.659061ms
Dec 28 22:53:06.846: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:53:06.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6020" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:33.522 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":265,"skipped":4208,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:53:06.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Dec 28 22:53:19.112: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Dec 28 22:53:29.315: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:53:29.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9558" for this suite.

• [SLOW TEST:22.430 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":266,"skipped":4223,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:53:29.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Dec 28 22:53:38.514: INFO: Pod name pod-adoption-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:53:38.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8319" for this suite.

• [SLOW TEST:9.316 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":267,"skipped":4254,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:53:38.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9374
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-9374
I1228 22:53:38.975293       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9374, replica count: 2
I1228 22:53:42.026729       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:53:45.027604       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:53:48.028157       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:53:51.028643       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 22:53:54.029600       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 28 22:53:54.030: INFO: Creating new exec pod
Dec 28 22:54:03.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9374 execpodmjczz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Dec 28 22:54:06.172: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Dec 28 22:54:06.172: INFO: stdout: ""
Dec 28 22:54:06.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9374 execpodmjczz -- /bin/sh -x -c nc -zv -t -w 2 10.107.86.115 80'
Dec 28 22:54:06.542: INFO: stderr: "+ nc -zv -t -w 2 10.107.86.115 80\nConnection to 10.107.86.115 80 port [tcp/http] succeeded!\n"
Dec 28 22:54:06.542: INFO: stdout: ""
Dec 28 22:54:06.542: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:54:06.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9374" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:28.016 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":268,"skipped":4306,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:54:06.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Dec 28 22:54:06.810: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 28 22:54:11.825: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:54:12.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7605" for this suite.

• [SLOW TEST:5.676 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":269,"skipped":4345,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:54:12.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
STEP: creating the pod
Dec 28 22:54:12.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3109'
Dec 28 22:54:13.143: INFO: stderr: ""
Dec 28 22:54:13.143: INFO: stdout: "pod/pause created\n"
Dec 28 22:54:13.143: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 28 22:54:13.143: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3109" to be "running and ready"
Dec 28 22:54:13.166: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 23.079246ms
Dec 28 22:54:16.048: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904968334s
Dec 28 22:54:18.055: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.911248943s
Dec 28 22:54:20.061: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.917310352s
Dec 28 22:54:22.069: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.925246391s
Dec 28 22:54:24.085: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.941867635s
Dec 28 22:54:26.295: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.152099813s
Dec 28 22:54:28.304: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.160925005s
Dec 28 22:54:30.314: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 17.170725047s
Dec 28 22:54:30.314: INFO: Pod "pause" satisfied condition "running and ready"
Dec 28 22:54:30.314: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 28 22:54:30.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3109'
Dec 28 22:54:30.659: INFO: stderr: ""
Dec 28 22:54:30.659: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 28 22:54:30.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3109'
Dec 28 22:54:30.862: INFO: stderr: ""
Dec 28 22:54:30.862: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          17s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 28 22:54:30.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3109'
Dec 28 22:54:30.988: INFO: stderr: ""
Dec 28 22:54:30.988: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 28 22:54:30.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3109'
Dec 28 22:54:31.150: INFO: stderr: ""
Dec 28 22:54:31.150: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          18s   \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369
STEP: using delete to clean up resources
Dec 28 22:54:31.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3109'
Dec 28 22:54:31.394: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 22:54:31.394: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 28 22:54:31.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3109'
Dec 28 22:54:31.642: INFO: stderr: "No resources found in kubectl-3109 namespace.\n"
Dec 28 22:54:31.642: INFO: stdout: ""
Dec 28 22:54:31.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3109 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 22:54:31.786: INFO: stderr: ""
Dec 28 22:54:31.787: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:54:31.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3109" for this suite.

• [SLOW TEST:19.445 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":270,"skipped":4347,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:54:31.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 28 22:54:40.063: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:54:40.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1223" for this suite.

• [SLOW TEST:8.391 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4351,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:54:40.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 28 22:54:40.333: INFO: Waiting up to 5m0s for pod "pod-8d0416ee-224a-46c6-8230-97e7ca0684d4" in namespace "emptydir-3279" to be "success or failure"
Dec 28 22:54:40.346: INFO: Pod "pod-8d0416ee-224a-46c6-8230-97e7ca0684d4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.627115ms
Dec 28 22:54:42.353: INFO: Pod "pod-8d0416ee-224a-46c6-8230-97e7ca0684d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019575024s
Dec 28 22:54:44.363: INFO: Pod "pod-8d0416ee-224a-46c6-8230-97e7ca0684d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030159874s
Dec 28 22:54:46.370: INFO: Pod "pod-8d0416ee-224a-46c6-8230-97e7ca0684d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037472282s
Dec 28 22:54:48.378: INFO: Pod "pod-8d0416ee-224a-46c6-8230-97e7ca0684d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044837532s
Dec 28 22:54:50.387: INFO: Pod "pod-8d0416ee-224a-46c6-8230-97e7ca0684d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054031778s
STEP: Saw pod success
Dec 28 22:54:50.387: INFO: Pod "pod-8d0416ee-224a-46c6-8230-97e7ca0684d4" satisfied condition "success or failure"
Dec 28 22:54:50.391: INFO: Trying to get logs from node jerma-node pod pod-8d0416ee-224a-46c6-8230-97e7ca0684d4 container test-container: 
STEP: delete the pod
Dec 28 22:54:50.462: INFO: Waiting for pod pod-8d0416ee-224a-46c6-8230-97e7ca0684d4 to disappear
Dec 28 22:54:50.471: INFO: Pod pod-8d0416ee-224a-46c6-8230-97e7ca0684d4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:54:50.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3279" for this suite.

• [SLOW TEST:10.288 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4465,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:54:50.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Dec 28 22:54:59.243: INFO: Successfully updated pod "labelsupdatee76ad2a8-2b40-4117-9d94-35a5968f2416"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:55:01.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3598" for this suite.

• [SLOW TEST:10.804 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4468,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:55:01.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 28 22:55:01.506: INFO: Number of nodes with available pods: 0
Dec 28 22:55:01.506: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:55:02.532: INFO: Number of nodes with available pods: 0
Dec 28 22:55:02.532: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:55:03.914: INFO: Number of nodes with available pods: 0
Dec 28 22:55:03.914: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:55:04.543: INFO: Number of nodes with available pods: 0
Dec 28 22:55:04.543: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:55:05.521: INFO: Number of nodes with available pods: 0
Dec 28 22:55:05.521: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:55:07.450: INFO: Number of nodes with available pods: 0
Dec 28 22:55:07.450: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:55:07.520: INFO: Number of nodes with available pods: 0
Dec 28 22:55:07.520: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:55:08.674: INFO: Number of nodes with available pods: 0
Dec 28 22:55:08.674: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:55:09.536: INFO: Number of nodes with available pods: 0
Dec 28 22:55:09.536: INFO: Node jerma-node is running more than one daemon pod
Dec 28 22:55:10.606: INFO: Number of nodes with available pods: 2
Dec 28 22:55:10.607: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 28 22:55:10.749: INFO: Number of nodes with available pods: 1
Dec 28 22:55:10.749: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 28 22:55:11.811: INFO: Number of nodes with available pods: 1
Dec 28 22:55:11.811: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 28 22:55:12.765: INFO: Number of nodes with available pods: 1
Dec 28 22:55:12.765: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 28 22:55:13.782: INFO: Number of nodes with available pods: 1
Dec 28 22:55:13.783: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 28 22:55:15.057: INFO: Number of nodes with available pods: 1
Dec 28 22:55:15.057: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 28 22:55:15.761: INFO: Number of nodes with available pods: 1
Dec 28 22:55:15.761: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 28 22:55:17.001: INFO: Number of nodes with available pods: 1
Dec 28 22:55:17.001: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 28 22:55:17.765: INFO: Number of nodes with available pods: 1
Dec 28 22:55:17.765: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 28 22:55:18.774: INFO: Number of nodes with available pods: 1
Dec 28 22:55:18.774: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 28 22:55:19.780: INFO: Number of nodes with available pods: 2
Dec 28 22:55:19.780: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9976, will wait for the garbage collector to delete the pods
Dec 28 22:55:19.881: INFO: Deleting DaemonSet.extensions daemon-set took: 36.50852ms
Dec 28 22:55:20.182: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.847594ms
Dec 28 22:55:36.898: INFO: Number of nodes with available pods: 0
Dec 28 22:55:36.898: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 22:55:36.902: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9976/daemonsets","resourceVersion":"10445192"},"items":null}

Dec 28 22:55:36.905: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9976/pods","resourceVersion":"10445192"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:55:36.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9976" for this suite.

• [SLOW TEST:35.633 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":274,"skipped":4513,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:55:36.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:55:37.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1414" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":275,"skipped":4522,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:55:37.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 28 22:55:37.148: INFO: Waiting up to 5m0s for pod "downwardapi-volume-adbdeec6-e42d-42ba-bac7-8ebb860240f9" in namespace "downward-api-3240" to be "success or failure"
Dec 28 22:55:37.194: INFO: Pod "downwardapi-volume-adbdeec6-e42d-42ba-bac7-8ebb860240f9": Phase="Pending", Reason="", readiness=false. Elapsed: 45.739512ms
Dec 28 22:55:39.202: INFO: Pod "downwardapi-volume-adbdeec6-e42d-42ba-bac7-8ebb860240f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053634539s
Dec 28 22:55:41.211: INFO: Pod "downwardapi-volume-adbdeec6-e42d-42ba-bac7-8ebb860240f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06325291s
Dec 28 22:55:43.218: INFO: Pod "downwardapi-volume-adbdeec6-e42d-42ba-bac7-8ebb860240f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070079072s
Dec 28 22:55:45.226: INFO: Pod "downwardapi-volume-adbdeec6-e42d-42ba-bac7-8ebb860240f9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077485004s
Dec 28 22:55:47.233: INFO: Pod "downwardapi-volume-adbdeec6-e42d-42ba-bac7-8ebb860240f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084934269s
STEP: Saw pod success
Dec 28 22:55:47.233: INFO: Pod "downwardapi-volume-adbdeec6-e42d-42ba-bac7-8ebb860240f9" satisfied condition "success or failure"
Dec 28 22:55:47.237: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-adbdeec6-e42d-42ba-bac7-8ebb860240f9 container client-container: 
STEP: delete the pod
Dec 28 22:55:47.335: INFO: Waiting for pod downwardapi-volume-adbdeec6-e42d-42ba-bac7-8ebb860240f9 to disappear
Dec 28 22:55:47.392: INFO: Pod downwardapi-volume-adbdeec6-e42d-42ba-bac7-8ebb860240f9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:55:47.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3240" for this suite.

• [SLOW TEST:10.359 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4526,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 28 22:55:47.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 28 22:55:48.424: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 28 22:55:50.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:55:52.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 22:55:54.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713170548, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 28 22:55:57.513: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 28 22:55:57.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-429-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 28 22:55:58.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8243" for this suite.
STEP: Destroying namespace "webhook-8243-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.481 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":277,"skipped":4531,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
Dec 28 22:55:58.903: INFO: Running AfterSuite actions on all nodes
Dec 28 22:55:58.903: INFO: Running AfterSuite actions on node 1
Dec 28 22:55:58.903: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4536,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315

Ran 278 of 4814 Specs in 6414.765 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4536 Skipped
--- FAIL: TestE2E (6414.87s)
FAIL
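
Since exactly one spec failed, it can usually be reproduced in isolation rather than re-running all 278 conformance specs: the Kubernetes e2e binary passes Ginkgo flags through, so an invocation along the lines of e2e.test --ginkgo.focus="Guestbook application" (treat the exact spelling as an assumption about your build of the harness) re-runs just the failing test. The assertion location reported above, kubectl.go:2315, is the place to start debugging.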