I0707 15:26:35.114453 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0707 15:26:35.115044 6 e2e.go:109] Starting e2e run "9fc662a3-b8b7-4d59-9ca2-685b86f49e7e" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1594135594 - Will randomize all specs
Will run 278 of 4843 specs

Jul 7 15:26:35.174: INFO: >>> kubeConfig: /root/.kube/config
Jul 7 15:26:35.178: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 7 15:26:35.200: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 7 15:26:35.242: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 7 15:26:35.242: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 7 15:26:35.242: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 7 15:26:35.249: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 7 15:26:35.249: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 7 15:26:35.249: INFO: e2e test version: v1.17.8
Jul 7 15:26:35.251: INFO: kube-apiserver version: v1.17.5
Jul 7 15:26:35.251: INFO: >>> kubeConfig: /root/.kube/config
Jul 7 15:26:35.291: INFO: Cluster IP family: ipv4
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 7 15:26:35.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Jul 7 15:26:35.421: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-962b7dbb-aed8-4450-863d-b79813a55eee
STEP: Creating a pod to test consume configMaps
Jul 7 15:26:35.441: INFO: Waiting up to 5m0s for pod "pod-configmaps-838fc44c-17d1-4d0d-bc8f-24855352ae30" in namespace "configmap-1136" to be "success or failure"
Jul 7 15:26:35.445: INFO: Pod "pod-configmaps-838fc44c-17d1-4d0d-bc8f-24855352ae30": Phase="Pending", Reason="", readiness=false. Elapsed: 3.55073ms
Jul 7 15:26:37.449: INFO: Pod "pod-configmaps-838fc44c-17d1-4d0d-bc8f-24855352ae30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007889282s
Jul 7 15:26:39.591: INFO: Pod "pod-configmaps-838fc44c-17d1-4d0d-bc8f-24855352ae30": Phase="Running", Reason="", readiness=true. Elapsed: 4.14970675s
Jul 7 15:26:41.595: INFO: Pod "pod-configmaps-838fc44c-17d1-4d0d-bc8f-24855352ae30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.153849218s
STEP: Saw pod success
Jul 7 15:26:41.595: INFO: Pod "pod-configmaps-838fc44c-17d1-4d0d-bc8f-24855352ae30" satisfied condition "success or failure"
Jul 7 15:26:41.597: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-838fc44c-17d1-4d0d-bc8f-24855352ae30 container configmap-volume-test:
STEP: delete the pod
Jul 7 15:26:41.665: INFO: Waiting for pod pod-configmaps-838fc44c-17d1-4d0d-bc8f-24855352ae30 to disappear
Jul 7 15:26:41.672: INFO: Pod pod-configmaps-838fc44c-17d1-4d0d-bc8f-24855352ae30 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 7 15:26:41.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1136" for this suite.
• [SLOW TEST:6.388 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":1,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 7 15:26:41.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 7 15:26:41.783: INFO: Waiting up to 5m0s for pod "pod-fd461be0-00ac-473f-958d-dede8d08f94a" in namespace "emptydir-7200" to be "success or failure"
Jul 7 15:26:41.793: INFO: Pod "pod-fd461be0-00ac-473f-958d-dede8d08f94a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.004574ms
Jul 7 15:26:43.931: INFO: Pod "pod-fd461be0-00ac-473f-958d-dede8d08f94a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147687707s
Jul 7 15:26:45.935: INFO: Pod "pod-fd461be0-00ac-473f-958d-dede8d08f94a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151791226s
STEP: Saw pod success
Jul 7 15:26:45.935: INFO: Pod "pod-fd461be0-00ac-473f-958d-dede8d08f94a" satisfied condition "success or failure"
Jul 7 15:26:45.938: INFO: Trying to get logs from node jerma-worker pod pod-fd461be0-00ac-473f-958d-dede8d08f94a container test-container:
STEP: delete the pod
Jul 7 15:26:46.052: INFO: Waiting for pod pod-fd461be0-00ac-473f-958d-dede8d08f94a to disappear
Jul 7 15:26:46.056: INFO: Pod pod-fd461be0-00ac-473f-958d-dede8d08f94a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 7 15:26:46.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7200" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":36,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 7 15:26:46.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0707 15:26:56.214628 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 7 15:26:56.214: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 7 15:26:56.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3667" for this suite.
• [SLOW TEST:10.157 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":3,"skipped":68,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 7 15:26:56.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 7 15:26:56.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e6731d3-732d-4b11-88f7-6d2475bb525e" in namespace "downward-api-2910" to be "success or failure"
Jul 7 15:26:56.344: INFO: Pod "downwardapi-volume-1e6731d3-732d-4b11-88f7-6d2475bb525e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.307504ms
Jul 7 15:26:58.608: INFO: Pod "downwardapi-volume-1e6731d3-732d-4b11-88f7-6d2475bb525e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291351305s
Jul 7 15:27:00.612: INFO: Pod "downwardapi-volume-1e6731d3-732d-4b11-88f7-6d2475bb525e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.295419214s
STEP: Saw pod success
Jul 7 15:27:00.612: INFO: Pod "downwardapi-volume-1e6731d3-732d-4b11-88f7-6d2475bb525e" satisfied condition "success or failure"
Jul 7 15:27:00.615: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1e6731d3-732d-4b11-88f7-6d2475bb525e container client-container:
STEP: delete the pod
Jul 7 15:27:00.633: INFO: Waiting for pod downwardapi-volume-1e6731d3-732d-4b11-88f7-6d2475bb525e to disappear
Jul 7 15:27:00.661: INFO: Pod downwardapi-volume-1e6731d3-732d-4b11-88f7-6d2475bb525e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 7 15:27:00.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2910" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":87,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:27:00.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 7 15:27:08.839: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 7 15:27:08.847: INFO: Pod pod-with-prestop-http-hook still exists Jul 7 15:27:10.847: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 7 15:27:10.904: INFO: Pod pod-with-prestop-http-hook still exists Jul 7 15:27:12.847: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 7 15:27:12.852: INFO: Pod pod-with-prestop-http-hook still exists Jul 7 15:27:14.847: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 7 15:27:14.852: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:27:14.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4840" for this suite. 
• [SLOW TEST:14.212 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":133,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 7 15:27:14.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 7 15:27:14.993: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d160267b-5eba-4332-adb0-4969f17e2e85" in namespace "projected-7528" to be "success or failure"
Jul 7 15:27:15.010: INFO: Pod "downwardapi-volume-d160267b-5eba-4332-adb0-4969f17e2e85": Phase="Pending", Reason="", readiness=false. Elapsed: 17.129121ms
Jul 7 15:27:17.026: INFO: Pod "downwardapi-volume-d160267b-5eba-4332-adb0-4969f17e2e85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033418322s
Jul 7 15:27:19.255: INFO: Pod "downwardapi-volume-d160267b-5eba-4332-adb0-4969f17e2e85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26238363s
Jul 7 15:27:21.477: INFO: Pod "downwardapi-volume-d160267b-5eba-4332-adb0-4969f17e2e85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.484087511s
STEP: Saw pod success
Jul 7 15:27:21.477: INFO: Pod "downwardapi-volume-d160267b-5eba-4332-adb0-4969f17e2e85" satisfied condition "success or failure"
Jul 7 15:27:21.507: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d160267b-5eba-4332-adb0-4969f17e2e85 container client-container:
STEP: delete the pod
Jul 7 15:27:21.806: INFO: Waiting for pod downwardapi-volume-d160267b-5eba-4332-adb0-4969f17e2e85 to disappear
Jul 7 15:27:21.890: INFO: Pod downwardapi-volume-d160267b-5eba-4332-adb0-4969f17e2e85 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 7 15:27:21.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7528" for this suite.
• [SLOW TEST:7.010 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":147,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 7 15:27:21.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 7 15:27:22.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5288" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":7,"skipped":185,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:27:22.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9056 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9056 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9056 Jul 7 15:27:22.987: INFO: Found 0 stateful pods, waiting for 1 Jul 7 15:27:33.441: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jul 7 15:27:42.992: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jul 7 15:27:42.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9056 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 7 15:27:47.258: INFO: stderr: "I0707 15:27:47.119053 28 log.go:172] (0xc0000f53f0) (0xc0003b7e00) Create stream\nI0707 15:27:47.119115 28 log.go:172] (0xc0000f53f0) (0xc0003b7e00) Stream added, broadcasting: 1\nI0707 15:27:47.121798 28 log.go:172] (0xc0000f53f0) Reply frame received for 1\nI0707 15:27:47.121860 28 log.go:172] (0xc0000f53f0) (0xc0006e3900) Create stream\nI0707 15:27:47.121884 28 log.go:172] (0xc0000f53f0) (0xc0006e3900) Stream added, broadcasting: 3\nI0707 15:27:47.122768 28 log.go:172] (0xc0000f53f0) Reply frame received for 3\nI0707 15:27:47.122833 28 log.go:172] (0xc0000f53f0) (0xc000688000) Create stream\nI0707 15:27:47.122850 28 log.go:172] (0xc0000f53f0) (0xc000688000) Stream added, broadcasting: 5\nI0707 15:27:47.123844 28 log.go:172] (0xc0000f53f0) Reply frame received for 5\nI0707 15:27:47.188340 28 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0707 15:27:47.188369 28 log.go:172] (0xc000688000) (5) Data frame handling\nI0707 15:27:47.188385 28 log.go:172] (0xc000688000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0707 15:27:47.250002 28 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0707 15:27:47.250044 28 log.go:172] (0xc000688000) (5) Data frame handling\nI0707 15:27:47.250071 28 log.go:172] (0xc0000f53f0) 
Data frame received for 3\nI0707 15:27:47.250081 28 log.go:172] (0xc0006e3900) (3) Data frame handling\nI0707 15:27:47.250095 28 log.go:172] (0xc0006e3900) (3) Data frame sent\nI0707 15:27:47.250104 28 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0707 15:27:47.250112 28 log.go:172] (0xc0006e3900) (3) Data frame handling\nI0707 15:27:47.252187 28 log.go:172] (0xc0000f53f0) Data frame received for 1\nI0707 15:27:47.252205 28 log.go:172] (0xc0003b7e00) (1) Data frame handling\nI0707 15:27:47.252216 28 log.go:172] (0xc0003b7e00) (1) Data frame sent\nI0707 15:27:47.252228 28 log.go:172] (0xc0000f53f0) (0xc0003b7e00) Stream removed, broadcasting: 1\nI0707 15:27:47.252245 28 log.go:172] (0xc0000f53f0) Go away received\nI0707 15:27:47.252781 28 log.go:172] (0xc0000f53f0) (0xc0003b7e00) Stream removed, broadcasting: 1\nI0707 15:27:47.252803 28 log.go:172] (0xc0000f53f0) (0xc0006e3900) Stream removed, broadcasting: 3\nI0707 15:27:47.252812 28 log.go:172] (0xc0000f53f0) (0xc000688000) Stream removed, broadcasting: 5\n" Jul 7 15:27:47.258: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 7 15:27:47.258: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 7 15:27:47.261: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 7 15:27:57.483: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 7 15:27:57.483: INFO: Waiting for statefulset status.replicas updated to 0 Jul 7 15:27:57.786: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999736s Jul 7 15:27:59.010: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.704484077s Jul 7 15:28:00.184: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.479962451s Jul 7 15:28:01.190: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.305602207s Jul 7 15:28:02.526: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.300068265s Jul 7 15:28:03.879: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.963832993s Jul 7 15:28:04.884: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.610588051s Jul 7 15:28:05.968: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.60612964s Jul 7 15:28:06.971: INFO: Verifying statefulset ss doesn't scale past 1 for another 522.339367ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9056 Jul 7 15:28:07.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9056 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 7 15:28:08.418: INFO: stderr: "I0707 15:28:08.116757 55 log.go:172] (0xc000969600) (0xc0009408c0) Create stream\nI0707 15:28:08.116815 55 log.go:172] (0xc000969600) (0xc0009408c0) Stream added, broadcasting: 1\nI0707 15:28:08.119655 55 log.go:172] (0xc000969600) Reply frame received for 1\nI0707 15:28:08.119681 55 log.go:172] (0xc000969600) (0xc0005d7040) Create stream\nI0707 15:28:08.119688 55 log.go:172] (0xc000969600) (0xc0005d7040) Stream added, broadcasting: 3\nI0707 15:28:08.120217 55 log.go:172] (0xc000969600) Reply frame received for 3\nI0707 15:28:08.120236 55 log.go:172] (0xc000969600) (0xc0002f0000) Create stream\nI0707 15:28:08.120241 55 log.go:172] (0xc000969600) (0xc0002f0000) Stream added, broadcasting: 5\nI0707 15:28:08.120761 55 
log.go:172] (0xc000969600) Reply frame received for 5\nI0707 15:28:08.172348 55 log.go:172] (0xc000969600) Data frame received for 5\nI0707 15:28:08.172369 55 log.go:172] (0xc0002f0000) (5) Data frame handling\nI0707 15:28:08.172384 55 log.go:172] (0xc0002f0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0707 15:28:08.410403 55 log.go:172] (0xc000969600) Data frame received for 5\nI0707 15:28:08.410435 55 log.go:172] (0xc0002f0000) (5) Data frame handling\nI0707 15:28:08.410464 55 log.go:172] (0xc000969600) Data frame received for 3\nI0707 15:28:08.410471 55 log.go:172] (0xc0005d7040) (3) Data frame handling\nI0707 15:28:08.410480 55 log.go:172] (0xc0005d7040) (3) Data frame sent\nI0707 15:28:08.410709 55 log.go:172] (0xc000969600) Data frame received for 3\nI0707 15:28:08.410731 55 log.go:172] (0xc0005d7040) (3) Data frame handling\nI0707 15:28:08.413794 55 log.go:172] (0xc000969600) Data frame received for 1\nI0707 15:28:08.413844 55 log.go:172] (0xc0009408c0) (1) Data frame handling\nI0707 15:28:08.413885 55 log.go:172] (0xc0009408c0) (1) Data frame sent\nI0707 15:28:08.413895 55 log.go:172] (0xc000969600) (0xc0009408c0) Stream removed, broadcasting: 1\nI0707 15:28:08.413908 55 log.go:172] (0xc000969600) Go away received\nI0707 15:28:08.414197 55 log.go:172] (0xc000969600) (0xc0009408c0) Stream removed, broadcasting: 1\nI0707 15:28:08.414208 55 log.go:172] (0xc000969600) (0xc0005d7040) Stream removed, broadcasting: 3\nI0707 15:28:08.414214 55 log.go:172] (0xc000969600) (0xc0002f0000) Stream removed, broadcasting: 5\n" Jul 7 15:28:08.418: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 7 15:28:08.418: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 7 15:28:08.537: INFO: Found 1 stateful pods, waiting for 3 Jul 7 15:28:18.542: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 7 15:28:18.542: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 7 15:28:18.542: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 7 15:28:28.541: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 7 15:28:28.541: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 7 15:28:28.541: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jul 7 15:28:28.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9056 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 7 15:28:28.781: INFO: stderr: "I0707 15:28:28.666687 70 log.go:172] (0xc0006b88f0) (0xc0006f9cc0) Create stream\nI0707 15:28:28.666748 70 log.go:172] (0xc0006b88f0) (0xc0006f9cc0) Stream added, broadcasting: 1\nI0707 15:28:28.668369 70 log.go:172] (0xc0006b88f0) Reply frame received for 1\nI0707 15:28:28.668399 70 log.go:172] (0xc0006b88f0) (0xc0006b4000) Create stream\nI0707 15:28:28.668405 70 log.go:172] (0xc0006b88f0) (0xc0006b4000) Stream added, broadcasting: 3\nI0707 15:28:28.669073 70 log.go:172] (0xc0006b88f0) Reply frame received for 3\nI0707 15:28:28.669099 70 log.go:172] (0xc0006b88f0) (0xc0006f9ea0) Create stream\nI0707 15:28:28.669256 70 log.go:172] 
(0xc0006b88f0) (0xc0006f9ea0) Stream added, broadcasting: 5\nI0707 15:28:28.669914 70 log.go:172] (0xc0006b88f0) Reply frame received for 5\nI0707 15:28:28.721404 70 log.go:172] (0xc0006b88f0) Data frame received for 5\nI0707 15:28:28.721430 70 log.go:172] (0xc0006f9ea0) (5) Data frame handling\nI0707 15:28:28.721444 70 log.go:172] (0xc0006f9ea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0707 15:28:28.773073 70 log.go:172] (0xc0006b88f0) Data frame received for 3\nI0707 15:28:28.773104 70 log.go:172] (0xc0006b4000) (3) Data frame handling\nI0707 15:28:28.773248 70 log.go:172] (0xc0006b4000) (3) Data frame sent\nI0707 15:28:28.773260 70 log.go:172] (0xc0006b88f0) Data frame received for 3\nI0707 15:28:28.773266 70 log.go:172] (0xc0006b4000) (3) Data frame handling\nI0707 15:28:28.773506 70 log.go:172] (0xc0006b88f0) Data frame received for 5\nI0707 15:28:28.773543 70 log.go:172] (0xc0006f9ea0) (5) Data frame handling\nI0707 15:28:28.775450 70 log.go:172] (0xc0006b88f0) Data frame received for 1\nI0707 15:28:28.775465 70 log.go:172] (0xc0006f9cc0) (1) Data frame handling\nI0707 15:28:28.775471 70 log.go:172] (0xc0006f9cc0) (1) Data frame sent\nI0707 15:28:28.775484 70 log.go:172] (0xc0006b88f0) (0xc0006f9cc0) Stream removed, broadcasting: 1\nI0707 15:28:28.775516 70 log.go:172] (0xc0006b88f0) Go away received\nI0707 15:28:28.775743 70 log.go:172] (0xc0006b88f0) (0xc0006f9cc0) Stream removed, broadcasting: 1\nI0707 15:28:28.775755 70 log.go:172] (0xc0006b88f0) (0xc0006b4000) Stream removed, broadcasting: 3\nI0707 15:28:28.775760 70 log.go:172] (0xc0006b88f0) (0xc0006f9ea0) Stream removed, broadcasting: 5\n" Jul 7 15:28:28.781: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 7 15:28:28.781: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 7 15:28:28.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9056 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 7 15:28:29.221: INFO: stderr: "I0707 15:28:28.917259 92 log.go:172] (0xc0000f5290) (0xc000560000) Create stream\nI0707 15:28:28.917303 92 log.go:172] (0xc0000f5290) (0xc000560000) Stream added, broadcasting: 1\nI0707 15:28:28.919697 92 log.go:172] (0xc0000f5290) Reply frame received for 1\nI0707 15:28:28.919736 92 log.go:172] (0xc0000f5290) (0xc000560140) Create stream\nI0707 15:28:28.919747 92 log.go:172] (0xc0000f5290) (0xc000560140) Stream added, broadcasting: 3\nI0707 15:28:28.920592 92 log.go:172] (0xc0000f5290) Reply frame received for 3\nI0707 15:28:28.920633 92 log.go:172] (0xc0000f5290) (0xc00055fa40) Create stream\nI0707 15:28:28.920649 92 log.go:172] (0xc0000f5290) (0xc00055fa40) Stream added, broadcasting: 5\nI0707 15:28:28.921514 92 log.go:172] (0xc0000f5290) Reply frame received for 5\nI0707 15:28:28.980458 92 log.go:172] (0xc0000f5290) Data frame received for 5\nI0707 15:28:28.980491 92 log.go:172] (0xc00055fa40) (5) Data frame handling\nI0707 15:28:28.980513 92 log.go:172] (0xc00055fa40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0707 15:28:29.211882 92 log.go:172] (0xc0000f5290) Data frame received for 3\nI0707 15:28:29.211905 92 log.go:172] (0xc000560140) (3) Data frame handling\nI0707 15:28:29.211922 92 log.go:172] (0xc000560140) (3) Data frame sent\nI0707 15:28:29.213503 92 log.go:172] (0xc0000f5290) Data frame received for 3\nI0707 15:28:29.213534 
92 log.go:172] (0xc000560140) (3) Data frame handling\nI0707 15:28:29.213685 92 log.go:172] (0xc0000f5290) Data frame received for 5\nI0707 15:28:29.213705 92 log.go:172] (0xc00055fa40) (5) Data frame handling\nI0707 15:28:29.215238 92 log.go:172] (0xc0000f5290) Data frame received for 1\nI0707 15:28:29.215256 92 log.go:172] (0xc000560000) (1) Data frame handling\nI0707 15:28:29.215273 92 log.go:172] (0xc000560000) (1) Data frame sent\nI0707 15:28:29.215287 92 log.go:172] (0xc0000f5290) (0xc000560000) Stream removed, broadcasting: 1\nI0707 15:28:29.215299 92 log.go:172] (0xc0000f5290) Go away received\nI0707 15:28:29.215701 92 log.go:172] (0xc0000f5290) (0xc000560000) Stream removed, broadcasting: 1\nI0707 15:28:29.215719 92 log.go:172] (0xc0000f5290) (0xc000560140) Stream removed, broadcasting: 3\nI0707 15:28:29.215730 92 log.go:172] (0xc0000f5290) (0xc00055fa40) Stream removed, broadcasting: 5\n" Jul 7 15:28:29.221: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 7 15:28:29.221: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 7 15:28:29.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9056 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 7 15:28:30.716: INFO: stderr: "I0707 15:28:30.249306 112 log.go:172] (0xc000a8abb0) (0xc0006fdd60) Create stream\nI0707 15:28:30.249400 112 log.go:172] (0xc000a8abb0) (0xc0006fdd60) Stream added, broadcasting: 1\nI0707 15:28:30.252529 112 log.go:172] (0xc000a8abb0) Reply frame received for 1\nI0707 15:28:30.252593 112 log.go:172] (0xc000a8abb0) (0xc0006fde00) Create stream\nI0707 15:28:30.252617 112 log.go:172] (0xc000a8abb0) (0xc0006fde00) Stream added, broadcasting: 3\nI0707 15:28:30.253799 112 log.go:172] (0xc000a8abb0) Reply frame received for 3\nI0707 15:28:30.253861 112 log.go:172] (0xc000a8abb0) (0xc0006fdea0) Create stream\nI0707 15:28:30.253888 112 log.go:172] (0xc000a8abb0) (0xc0006fdea0) Stream added, broadcasting: 5\nI0707 15:28:30.254691 112 log.go:172] (0xc000a8abb0) Reply frame received for 5\nI0707 15:28:30.331127 112 log.go:172] (0xc000a8abb0) Data frame received for 5\nI0707 15:28:30.331155 112 log.go:172] (0xc0006fdea0) (5) Data frame handling\nI0707 15:28:30.331172 112 log.go:172] (0xc0006fdea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0707 15:28:30.708126 112 log.go:172] (0xc000a8abb0) Data frame received for 3\nI0707 15:28:30.708161 112 log.go:172] (0xc0006fde00) (3) Data frame handling\nI0707 15:28:30.708191 112 log.go:172] (0xc0006fde00) (3) Data frame sent\nI0707 15:28:30.708204 112 log.go:172] (0xc000a8abb0) Data frame received for 3\nI0707 15:28:30.708215 112 log.go:172] (0xc0006fde00) (3) Data frame handling\nI0707 15:28:30.708410 112 log.go:172] (0xc000a8abb0) Data frame received for 5\nI0707 15:28:30.708446 112 log.go:172] (0xc0006fdea0) (5) Data frame handling\nI0707 15:28:30.711226 112 log.go:172] (0xc000a8abb0) Data frame received for 1\nI0707 15:28:30.711289 112 log.go:172] (0xc0006fdd60) (1) Data frame handling\nI0707 15:28:30.711315 112 log.go:172] (0xc0006fdd60) (1) Data frame sent\nI0707 15:28:30.711330 112 log.go:172] (0xc000a8abb0) (0xc0006fdd60) Stream removed, broadcasting: 1\nI0707 15:28:30.711663 112 log.go:172] (0xc000a8abb0) (0xc0006fdd60) Stream removed, broadcasting: 1\nI0707 15:28:30.711680 112 log.go:172] (0xc000a8abb0) (0xc0006fde00) Stream removed, 
broadcasting: 3\nI0707 15:28:30.711823 112 log.go:172] (0xc000a8abb0) (0xc0006fdea0) Stream removed, broadcasting: 5\n" Jul 7 15:28:30.716: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 7 15:28:30.716: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 7 15:28:30.716: INFO: Waiting for statefulset status.replicas updated to 0 Jul 7 15:28:30.848: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jul 7 15:28:40.857: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 7 15:28:40.857: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 7 15:28:40.857: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 7 15:28:40.868: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999685s Jul 7 15:28:41.933: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997026991s Jul 7 15:28:43.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.931686905s Jul 7 15:28:44.136: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.734438767s Jul 7 15:28:45.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.728759378s Jul 7 15:28:46.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.249555795s Jul 7 15:28:47.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.055756482s Jul 7 15:28:48.826: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.044704702s Jul 7 15:28:49.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.039411821s Jul 7 15:28:50.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 33.846937ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9056 Jul 7 15:28:51.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9056 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 7 15:28:52.046: INFO: stderr: "I0707 15:28:51.967768 128 log.go:172] (0xc0008d66e0) (0xc000665ea0) Create stream\nI0707 15:28:51.967829 128 log.go:172] (0xc0008d66e0) (0xc000665ea0) Stream added, broadcasting: 1\nI0707 15:28:51.970091 128 log.go:172] (0xc0008d66e0) Reply frame received for 1\nI0707 15:28:51.970124 128 log.go:172] (0xc0008d66e0) (0xc0005b06e0) Create stream\nI0707 15:28:51.970133 128 log.go:172] (0xc0008d66e0) (0xc0005b06e0) Stream added, broadcasting: 3\nI0707 15:28:51.970765 128 log.go:172] (0xc0008d66e0) Reply frame received for 3\nI0707 15:28:51.970788 128 log.go:172] (0xc0008d66e0) (0xc000665f40) Create stream\nI0707 15:28:51.970796 128 log.go:172] (0xc0008d66e0) (0xc000665f40) Stream added, broadcasting: 5\nI0707 15:28:51.971499 128 log.go:172] (0xc0008d66e0) Reply frame received for 5\nI0707 15:28:52.039621 128 log.go:172] (0xc0008d66e0) Data frame received for 5\nI0707 15:28:52.039663 128 log.go:172] (0xc000665f40) (5) Data frame handling\nI0707 15:28:52.039720 128 log.go:172] (0xc000665f40) (5) Data frame sent\nI0707 15:28:52.039756 128 log.go:172] (0xc0008d66e0) Data frame received for 5\nI0707 15:28:52.039787 128 log.go:172] (0xc000665f40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0707 15:28:52.039842 128 log.go:172] (0xc0008d66e0) Data frame received for 3\nI0707 15:28:52.039863 128 
log.go:172] (0xc0005b06e0) (3) Data frame handling\nI0707 15:28:52.039888 128 log.go:172] (0xc0005b06e0) (3) Data frame sent\nI0707 15:28:52.039912 128 log.go:172] (0xc0008d66e0) Data frame received for 3\nI0707 15:28:52.039933 128 log.go:172] (0xc0005b06e0) (3) Data frame handling\nI0707 15:28:52.041099 128 log.go:172] (0xc0008d66e0) Data frame received for 1\nI0707 15:28:52.041301 128 log.go:172] (0xc000665ea0) (1) Data frame handling\nI0707 15:28:52.041325 128 log.go:172] (0xc000665ea0) (1) Data frame sent\nI0707 15:28:52.041342 128 log.go:172] (0xc0008d66e0) (0xc000665ea0) Stream removed, broadcasting: 1\nI0707 15:28:52.041356 128 log.go:172] (0xc0008d66e0) Go away received\nI0707 15:28:52.041694 128 log.go:172] (0xc0008d66e0) (0xc000665ea0) Stream removed, broadcasting: 1\nI0707 15:28:52.041716 128 log.go:172] (0xc0008d66e0) (0xc0005b06e0) Stream removed, broadcasting: 3\nI0707 15:28:52.041725 128 log.go:172] (0xc0008d66e0) (0xc000665f40) Stream removed, broadcasting: 5\n" Jul 7 15:28:52.046: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 7 15:28:52.046: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 7 15:28:52.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9056 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 7 15:28:52.239: INFO: stderr: "I0707 15:28:52.178700 148 log.go:172] (0xc00097c0b0) (0xc0002654a0) Create stream\nI0707 15:28:52.178763 148 log.go:172] (0xc00097c0b0) (0xc0002654a0) Stream added, broadcasting: 1\nI0707 15:28:52.181524 148 log.go:172] (0xc00097c0b0) Reply frame received for 1\nI0707 15:28:52.181574 148 log.go:172] (0xc00097c0b0) (0xc000990000) Create stream\nI0707 15:28:52.181589 148 log.go:172] (0xc00097c0b0) (0xc000990000) Stream added, broadcasting: 3\nI0707 15:28:52.182548 148 log.go:172] (0xc00097c0b0) Reply frame received for 3\nI0707 15:28:52.182594 148 log.go:172] (0xc00097c0b0) (0xc00066ba40) Create stream\nI0707 15:28:52.182614 148 log.go:172] (0xc00097c0b0) (0xc00066ba40) Stream added, broadcasting: 5\nI0707 15:28:52.183548 148 log.go:172] (0xc00097c0b0) Reply frame received for 5\nI0707 15:28:52.232891 148 log.go:172] (0xc00097c0b0) Data frame received for 5\nI0707 15:28:52.232914 148 log.go:172] (0xc00066ba40) (5) Data frame handling\nI0707 15:28:52.232924 148 log.go:172] (0xc00066ba40) (5) Data frame sent\nI0707 15:28:52.232931 148 log.go:172] (0xc00097c0b0) Data frame received for 5\nI0707 15:28:52.232935 148 log.go:172] (0xc00066ba40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0707 15:28:52.232954 148 log.go:172] (0xc00097c0b0) Data frame received for 3\nI0707 15:28:52.232962 148 log.go:172] (0xc000990000) (3) Data frame handling\nI0707 15:28:52.232972 148 log.go:172] (0xc000990000) (3) Data frame sent\nI0707 15:28:52.233016 148 log.go:172] (0xc00097c0b0) Data frame received for 3\nI0707 15:28:52.233046 148 log.go:172] (0xc000990000) (3) Data frame handling\nI0707 15:28:52.234669 148 log.go:172] (0xc00097c0b0) Data frame received for 1\nI0707 15:28:52.234698 148 log.go:172] (0xc0002654a0) (1) Data frame handling\nI0707 15:28:52.234731 148 log.go:172] (0xc0002654a0) (1) Data frame sent\nI0707 15:28:52.234758 148 log.go:172] (0xc00097c0b0) (0xc0002654a0) Stream removed, broadcasting: 1\nI0707 15:28:52.234781 148 log.go:172] (0xc00097c0b0) Go away received\nI0707 15:28:52.235017 148 log.go:172] 
(0xc00097c0b0) (0xc0002654a0) Stream removed, broadcasting: 1\nI0707 15:28:52.235029 148 log.go:172] (0xc00097c0b0) (0xc000990000) Stream removed, broadcasting: 3\nI0707 15:28:52.235035 148 log.go:172] (0xc00097c0b0) (0xc00066ba40) Stream removed, broadcasting: 5\n" Jul 7 15:28:52.239: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 7 15:28:52.239: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 7 15:28:52.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9056 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 7 15:28:52.451: INFO: stderr: "I0707 15:28:52.382217 170 log.go:172] (0xc000b49760) (0xc0009446e0) Create stream\nI0707 15:28:52.382297 170 log.go:172] (0xc000b49760) (0xc0009446e0) Stream added, broadcasting: 1\nI0707 15:28:52.387516 170 log.go:172] (0xc000b49760) Reply frame received for 1\nI0707 15:28:52.387569 170 log.go:172] (0xc000b49760) (0xc0006c08c0) Create stream\nI0707 15:28:52.387589 170 log.go:172] (0xc000b49760) (0xc0006c08c0) Stream added, broadcasting: 3\nI0707 15:28:52.388570 170 log.go:172] (0xc000b49760) Reply frame received for 3\nI0707 15:28:52.388599 170 log.go:172] (0xc000b49760) (0xc0004c9680) Create stream\nI0707 15:28:52.388609 170 log.go:172] (0xc000b49760) (0xc0004c9680) Stream added, broadcasting: 5\nI0707 15:28:52.389618 170 log.go:172] (0xc000b49760) Reply frame received for 5\nI0707 15:28:52.443667 170 log.go:172] (0xc000b49760) Data frame received for 3\nI0707 15:28:52.443730 170 log.go:172] (0xc0006c08c0) (3) Data frame handling\nI0707 15:28:52.443757 170 log.go:172] (0xc000b49760) Data frame received for 5\nI0707 15:28:52.443952 170 log.go:172] (0xc0004c9680) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0707 15:28:52.443989 170 log.go:172] (0xc0006c08c0) (3) Data frame sent\nI0707 15:28:52.444043 170 log.go:172] (0xc000b49760) Data frame received for 3\nI0707 15:28:52.444067 170 log.go:172] (0xc0006c08c0) (3) Data frame handling\nI0707 15:28:52.444094 170 log.go:172] (0xc0004c9680) (5) Data frame sent\nI0707 15:28:52.444110 170 log.go:172] (0xc000b49760) Data frame received for 5\nI0707 15:28:52.444121 170 log.go:172] (0xc0004c9680) (5) Data frame handling\nI0707 15:28:52.446155 170 log.go:172] (0xc000b49760) Data frame received for 1\nI0707 15:28:52.446188 170 log.go:172] (0xc0009446e0) (1) Data frame handling\nI0707 15:28:52.446223 170 log.go:172] (0xc0009446e0) (1) Data frame sent\nI0707 15:28:52.446255 170 log.go:172] (0xc000b49760) (0xc0009446e0) Stream removed, broadcasting: 1\nI0707 15:28:52.446291 170 log.go:172] (0xc000b49760) Go away received\nI0707 15:28:52.446693 170 log.go:172] (0xc000b49760) (0xc0009446e0) Stream removed, broadcasting: 1\nI0707 15:28:52.446718 170 log.go:172] (0xc000b49760) (0xc0006c08c0) Stream removed, broadcasting: 3\nI0707 15:28:52.446731 170 log.go:172] (0xc000b49760) (0xc0004c9680) Stream removed, broadcasting: 5\n" Jul 7 15:28:52.451: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 7 15:28:52.451: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 7 15:28:52.451: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul 7 15:29:32.468: INFO: Deleting all statefulset in ns statefulset-9056
Jul 7 15:29:32.471: INFO: Scaling statefulset ss to 0
Jul 7 15:29:32.479: INFO: Waiting for statefulset status.replicas updated to 0
Jul 7 15:29:32.482: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 7 15:29:32.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9056" for this suite.
• [SLOW TEST:129.987 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":8,"skipped":187,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 7 15:29:32.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 7 15:29:33.560: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:33.591: INFO: Number of nodes with available pods: 0
Jul 7 15:29:33.591: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:34.695: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:34.699: INFO: Number of nodes with available pods: 0
Jul 7 15:29:34.699: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:35.597: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:35.601: INFO: Number of nodes with available pods: 0
Jul 7 15:29:35.601: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:37.220: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:37.223: INFO: Number of nodes with available pods: 0
Jul 7 15:29:37.223: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:38.062: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:38.341: INFO: Number of nodes with available pods: 0
Jul 7 15:29:38.341: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:38.749: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:38.753: INFO: Number of nodes with available pods: 0
Jul 7 15:29:38.753: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:39.779: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:39.876: INFO: Number of nodes with available pods: 0
Jul 7 15:29:39.876: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:40.654: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:41.062: INFO: Number of nodes with available pods: 0
Jul 7 15:29:41.062: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:41.731: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:42.197: INFO: Number of nodes with available pods: 0
Jul 7 15:29:42.197: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:43.236: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:43.528: INFO: Number of nodes with available pods: 0
Jul 7 15:29:43.528: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:43.832: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:44.695: INFO: Number of nodes with available pods: 0
Jul 7 15:29:44.695: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:45.780: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:46.527: INFO: Number of nodes with available pods: 0
Jul 7 15:29:46.527: INFO: Node jerma-worker is running more than one daemon pod
Jul 7 15:29:46.923: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:47.127: INFO: Number of nodes with available pods: 1
Jul 7 15:29:47.127: INFO: Node jerma-worker2 is running more than one daemon pod
Jul 7 15:29:47.821: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:48.158: INFO: Number of nodes with available pods: 2
Jul 7 15:29:48.158: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul 7 15:29:49.081: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 7 15:29:49.366: INFO: Number of nodes with available pods: 2
Jul 7 15:29:49.366: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2871, will wait for the garbage collector to delete the pods
Jul 7 15:29:52.669: INFO: Deleting DaemonSet.extensions daemon-set took: 746.545302ms
Jul 7 15:29:54.469: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.800882469s
Jul 7 15:30:06.763: INFO: Number of nodes with available pods: 0
Jul 7 15:30:06.763: INFO: Number of running nodes: 0, number of available pods: 0
Jul 7 15:30:06.770: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2871/daemonsets","resourceVersion":"926820"},"items":null}
Jul 7 15:30:06.772: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2871/pods","resourceVersion":"926820"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 7 15:30:06.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2871" for this suite.
• [SLOW TEST:34.183 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":9,"skipped":194,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:30:06.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jul 7 15:30:19.711: INFO: Successfully updated pod "annotationupdatec15a1add-42ef-4ba5-b575-9166cdf91c73" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:30:20.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2062" for this suite. 
• [SLOW TEST:13.446 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":199,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:30:20.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jul 7 15:30:22.239: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-a cc2e29cb-5b0d-4a7e-a544-7ee1f065ed8d 926894 0 2020-07-07 15:30:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 7 15:30:22.239: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-a cc2e29cb-5b0d-4a7e-a544-7ee1f065ed8d 926894 0 2020-07-07 15:30:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jul 7 15:30:32.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-a cc2e29cb-5b0d-4a7e-a544-7ee1f065ed8d 926933 0 2020-07-07 15:30:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 7 15:30:32.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-a cc2e29cb-5b0d-4a7e-a544-7ee1f065ed8d 926933 0 2020-07-07 15:30:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jul 7 15:30:42.643: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-a cc2e29cb-5b0d-4a7e-a544-7ee1f065ed8d 926961 0 2020-07-07 15:30:22 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 7 15:30:42.644: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-a cc2e29cb-5b0d-4a7e-a544-7ee1f065ed8d 926961 0 2020-07-07 15:30:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jul 7 15:30:52.650: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-a cc2e29cb-5b0d-4a7e-a544-7ee1f065ed8d 926993 0 2020-07-07 15:30:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 7 15:30:52.650: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-a cc2e29cb-5b0d-4a7e-a544-7ee1f065ed8d 926993 0 2020-07-07 15:30:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jul 7 15:31:02.658: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-b 621ce979-1263-4794-97b4-93ca2d019def 927019 0 2020-07-07 15:31:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 7 15:31:02.658: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-b 621ce979-1263-4794-97b4-93ca2d019def 927019 0 2020-07-07 15:31:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jul 7 15:31:12.664: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-b 621ce979-1263-4794-97b4-93ca2d019def 927049 0 2020-07-07 15:31:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 7 15:31:12.664: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2493 /api/v1/namespaces/watch-2493/configmaps/e2e-watch-test-configmap-b 621ce979-1263-4794-97b4-93ca2d019def 927049 0 2020-07-07 15:31:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:31:22.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2493" for this suite. 
• [SLOW TEST:62.623 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":11,"skipped":220,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:31:22.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 7 15:31:24.478: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:24.551: INFO: Number of nodes with available pods: 0 Jul 7 15:31:24.551: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:25.556: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:25.560: INFO: Number of nodes with available pods: 0 Jul 7 15:31:25.560: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:27.599: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:28.067: INFO: Number of nodes with available pods: 0 Jul 7 15:31:28.067: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:28.894: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:28.897: INFO: Number of nodes with available pods: 0 Jul 7 15:31:28.897: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:29.702: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:29.705: INFO: Number of nodes with available pods: 0 Jul 7 15:31:29.705: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:30.774: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:30.778: INFO: Number of nodes with available pods: 0 Jul 7 15:31:30.778: INFO: Node jerma-worker is 
running more than one daemon pod Jul 7 15:31:31.576: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:31.580: INFO: Number of nodes with available pods: 0 Jul 7 15:31:31.580: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:32.556: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:32.560: INFO: Number of nodes with available pods: 2 Jul 7 15:31:32.560: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jul 7 15:31:32.732: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:32.834: INFO: Number of nodes with available pods: 1 Jul 7 15:31:32.834: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:33.839: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:33.843: INFO: Number of nodes with available pods: 1 Jul 7 15:31:33.843: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:34.839: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:34.842: INFO: Number of nodes with available pods: 1 Jul 7 15:31:34.842: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:35.838: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:35.841: INFO: Number of nodes with available pods: 1 Jul 7 15:31:35.841: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:36.839: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:36.843: INFO: Number of nodes with available pods: 1 Jul 7 15:31:36.843: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:37.902: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:37.905: INFO: Number of nodes with available pods: 1 Jul 7 15:31:37.905: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:38.888: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:38.891: INFO: Number of nodes with available pods: 1 Jul 7 15:31:38.891: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:39.839: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:39.842: INFO: Number of nodes with available pods: 1 Jul 7 15:31:39.842: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:40.859: INFO: DaemonSet pods can't tolerate node 
jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:40.886: INFO: Number of nodes with available pods: 1 Jul 7 15:31:40.886: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:41.871: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:41.875: INFO: Number of nodes with available pods: 1 Jul 7 15:31:41.875: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:31:42.839: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:31:42.843: INFO: Number of nodes with available pods: 2 Jul 7 15:31:42.843: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4622, will wait for the garbage collector to delete the pods Jul 7 15:31:42.956: INFO: Deleting DaemonSet.extensions daemon-set took: 56.236596ms Jul 7 15:31:43.257: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.270747ms Jul 7 15:31:56.860: INFO: Number of nodes with available pods: 0 Jul 7 15:31:56.860: INFO: Number of running nodes: 0, number of available pods: 0 Jul 7 15:31:56.862: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4622/daemonsets","resourceVersion":"927228"},"items":null} Jul 7 15:31:56.865: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4622/pods","resourceVersion":"927228"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:31:56.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4622" for this suite. 
• [SLOW TEST:34.023 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":12,"skipped":235,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:31:56.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-vtvh STEP: Creating a pod to test atomic-volume-subpath Jul 7 15:31:57.026: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vtvh" in namespace "subpath-5942" to be "success or failure" Jul 7 15:31:57.241: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Pending", Reason="", readiness=false. Elapsed: 215.705814ms Jul 7 15:31:59.319: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293059928s Jul 7 15:32:01.323: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296986841s Jul 7 15:32:03.327: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Running", Reason="", readiness=true. Elapsed: 6.301345338s Jul 7 15:32:05.331: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Running", Reason="", readiness=true. Elapsed: 8.305525797s Jul 7 15:32:07.336: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Running", Reason="", readiness=true. Elapsed: 10.310151923s Jul 7 15:32:09.340: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Running", Reason="", readiness=true. Elapsed: 12.313945402s Jul 7 15:32:11.345: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Running", Reason="", readiness=true. Elapsed: 14.319154571s Jul 7 15:32:13.348: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Running", Reason="", readiness=true. Elapsed: 16.322704116s Jul 7 15:32:15.352: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Running", Reason="", readiness=true. Elapsed: 18.326575823s Jul 7 15:32:17.356: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Running", Reason="", readiness=true. Elapsed: 20.329955792s Jul 7 15:32:19.531: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Running", Reason="", readiness=true. Elapsed: 22.504939511s Jul 7 15:32:21.533: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Running", Reason="", readiness=true. Elapsed: 24.507592917s Jul 7 15:32:23.554: INFO: Pod "pod-subpath-test-configmap-vtvh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.528466502s STEP: Saw pod success Jul 7 15:32:23.554: INFO: Pod "pod-subpath-test-configmap-vtvh" satisfied condition "success or failure" Jul 7 15:32:23.557: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-vtvh container test-container-subpath-configmap-vtvh: STEP: delete the pod Jul 7 15:32:23.961: INFO: Waiting for pod pod-subpath-test-configmap-vtvh to disappear Jul 7 15:32:24.224: INFO: Pod pod-subpath-test-configmap-vtvh no longer exists STEP: Deleting pod pod-subpath-test-configmap-vtvh Jul 7 15:32:24.224: INFO: Deleting pod "pod-subpath-test-configmap-vtvh" in namespace "subpath-5942" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:32:24.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5942" for this suite. • [SLOW TEST:27.651 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":13,"skipped":246,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:32:24.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:32:25.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3403" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":14,"skipped":268,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:32:25.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 7 15:32:26.198: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 7 15:32:28.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732746, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732746, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732746, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732746, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:32:30.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732746, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732746, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732746, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732746, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 7 15:32:34.272: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:32:34.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4750" for this suite. STEP: Destroying namespace "webhook-4750-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.582 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":15,"skipped":270,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:32:34.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:32:46.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6384" for this suite. 
• [SLOW TEST:11.434 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":16,"skipped":291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:32:46.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 7 15:32:47.179: INFO: Waiting up to 5m0s for pod "downward-api-589ba494-4c1c-4e63-815c-ea61c060079c" in namespace "downward-api-7423" to be "success or failure" Jul 7 15:32:47.235: INFO: Pod "downward-api-589ba494-4c1c-4e63-815c-ea61c060079c": Phase="Pending", Reason="", readiness=false. Elapsed: 55.96556ms Jul 7 15:32:50.044: INFO: Pod "downward-api-589ba494-4c1c-4e63-815c-ea61c060079c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.865862675s Jul 7 15:32:52.191: INFO: Pod "downward-api-589ba494-4c1c-4e63-815c-ea61c060079c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.012371548s Jul 7 15:32:54.206: INFO: Pod "downward-api-589ba494-4c1c-4e63-815c-ea61c060079c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.027286816s Jul 7 15:32:56.471: INFO: Pod "downward-api-589ba494-4c1c-4e63-815c-ea61c060079c": Phase="Running", Reason="", readiness=true. Elapsed: 9.292010503s Jul 7 15:32:58.474: INFO: Pod "downward-api-589ba494-4c1c-4e63-815c-ea61c060079c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.295828089s STEP: Saw pod success Jul 7 15:32:58.474: INFO: Pod "downward-api-589ba494-4c1c-4e63-815c-ea61c060079c" satisfied condition "success or failure" Jul 7 15:32:58.478: INFO: Trying to get logs from node jerma-worker2 pod downward-api-589ba494-4c1c-4e63-815c-ea61c060079c container dapi-container: STEP: delete the pod Jul 7 15:32:58.496: INFO: Waiting for pod downward-api-589ba494-4c1c-4e63-815c-ea61c060079c to disappear Jul 7 15:32:58.526: INFO: Pod downward-api-589ba494-4c1c-4e63-815c-ea61c060079c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:32:58.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7423" for this suite. 
• [SLOW TEST:12.257 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":319,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:32:58.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 7 15:32:58.666: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 7 15:33:01.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-766 create -f -' Jul 7 15:33:06.863: INFO: stderr: "" Jul 7 15:33:06.863: INFO: stdout: "e2e-test-crd-publish-openapi-6581-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 7 15:33:06.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-766 delete e2e-test-crd-publish-openapi-6581-crds test-cr' Jul 7 15:33:06.976: INFO: stderr: "" Jul 7 15:33:06.976: INFO: stdout: "e2e-test-crd-publish-openapi-6581-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jul 7 15:33:06.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-766 apply -f -' Jul 7 15:33:07.241: INFO: stderr: "" Jul 7 15:33:07.241: INFO: stdout: "e2e-test-crd-publish-openapi-6581-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 7 15:33:07.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-766 delete e2e-test-crd-publish-openapi-6581-crds test-cr' Jul 7 15:33:07.366: INFO: stderr: "" Jul 7 15:33:07.366: INFO: stdout: "e2e-test-crd-publish-openapi-6581-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jul 7 15:33:07.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6581-crds' Jul 7 15:33:07.604: INFO: stderr: "" Jul 7 15:33:07.604: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6581-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 
15:33:10.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-766" for this suite. • [SLOW TEST:12.045 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":18,"skipped":322,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:33:10.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jul 7 15:33:19.505: INFO: 0 pods remaining Jul 7 15:33:19.505: INFO: 0 pods has nil DeletionTimestamp Jul 7 15:33:19.505: INFO: STEP: Gathering metrics W0707 15:33:21.052901 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 7 15:33:21.052: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:33:21.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9038" for this suite. 
• [SLOW TEST:10.480 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":19,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:33:21.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jul 7 15:33:23.657: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Jul 7 15:33:25.649: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jul 7 15:33:31.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732805, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732805, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732806, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732805, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:33:33.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732805, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732805, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732806, loc:(*time.Location)(0x78f7140)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732805, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:33:35.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732805, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732805, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732806, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732805, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:33:37.958: INFO: Waited 517.003776ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:33:43.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8834" for this suite. • [SLOW TEST:22.182 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":20,"skipped":368,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:33:43.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3548 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 7 15:33:44.102: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 7 15:34:21.217: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.207 8081 | grep -v 
'^\s*$'] Namespace:pod-network-test-3548 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 7 15:34:21.217: INFO: >>> kubeConfig: /root/.kube/config I0707 15:34:21.247849 6 log.go:172] (0xc00504e9a0) (0xc002ac9040) Create stream I0707 15:34:21.247885 6 log.go:172] (0xc00504e9a0) (0xc002ac9040) Stream added, broadcasting: 1 I0707 15:34:21.249857 6 log.go:172] (0xc00504e9a0) Reply frame received for 1 I0707 15:34:21.249896 6 log.go:172] (0xc00504e9a0) (0xc0027e9040) Create stream I0707 15:34:21.249910 6 log.go:172] (0xc00504e9a0) (0xc0027e9040) Stream added, broadcasting: 3 I0707 15:34:21.250790 6 log.go:172] (0xc00504e9a0) Reply frame received for 3 I0707 15:34:21.250817 6 log.go:172] (0xc00504e9a0) (0xc002ac90e0) Create stream I0707 15:34:21.250828 6 log.go:172] (0xc00504e9a0) (0xc002ac90e0) Stream added, broadcasting: 5 I0707 15:34:21.251688 6 log.go:172] (0xc00504e9a0) Reply frame received for 5 I0707 15:34:22.417887 6 log.go:172] (0xc00504e9a0) Data frame received for 5 I0707 15:34:22.417927 6 log.go:172] (0xc002ac90e0) (5) Data frame handling I0707 15:34:22.417950 6 log.go:172] (0xc00504e9a0) Data frame received for 3 I0707 15:34:22.417970 6 log.go:172] (0xc0027e9040) (3) Data frame handling I0707 15:34:22.417982 6 log.go:172] (0xc0027e9040) (3) Data frame sent I0707 15:34:22.418015 6 log.go:172] (0xc00504e9a0) Data frame received for 3 I0707 15:34:22.418079 6 log.go:172] (0xc0027e9040) (3) Data frame handling I0707 15:34:22.420154 6 log.go:172] (0xc00504e9a0) Data frame received for 1 I0707 15:34:22.420195 6 log.go:172] (0xc002ac9040) (1) Data frame handling I0707 15:34:22.420236 6 log.go:172] (0xc002ac9040) (1) Data frame sent I0707 15:34:22.420262 6 log.go:172] (0xc00504e9a0) (0xc002ac9040) Stream removed, broadcasting: 1 I0707 15:34:22.420293 6 log.go:172] (0xc00504e9a0) Go away received I0707 15:34:22.420742 6 log.go:172] (0xc00504e9a0) (0xc002ac9040) Stream removed, broadcasting: 1 I0707 15:34:22.420768 6 log.go:172] (0xc00504e9a0) (0xc0027e9040) Stream removed, broadcasting: 3 I0707 15:34:22.420779 6 log.go:172] (0xc00504e9a0) (0xc002ac90e0) Stream removed, broadcasting: 5 Jul 7 15:34:22.420: INFO: Found all expected endpoints: [netserver-0] Jul 7 15:34:22.423: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.167 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3548 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 7 15:34:22.423: INFO: >>> kubeConfig: /root/.kube/config I0707 15:34:22.459887 6 log.go:172] (0xc00504ef20) (0xc002ac92c0) Create stream I0707 15:34:22.459912 6 log.go:172] (0xc00504ef20) (0xc002ac92c0) Stream added, broadcasting: 1 I0707 15:34:22.462113 6 log.go:172] (0xc00504ef20) Reply frame received for 1 I0707 15:34:22.462140 6 log.go:172] (0xc00504ef20) (0xc002ac9360) Create stream I0707 15:34:22.462153 6 log.go:172] (0xc00504ef20) (0xc002ac9360) Stream added, broadcasting: 3 I0707 15:34:22.462989 6 log.go:172] (0xc00504ef20) Reply frame received for 3 I0707 15:34:22.463012 6 log.go:172] (0xc00504ef20) (0xc0026dd5e0) Create stream I0707 15:34:22.463022 6 log.go:172] (0xc00504ef20) (0xc0026dd5e0) Stream added, broadcasting: 5 I0707 15:34:22.463896 6 log.go:172] (0xc00504ef20) Reply frame received for 5 I0707 15:34:23.532647 6 log.go:172] (0xc00504ef20) Data frame received for 3 I0707 15:34:23.532693 6 log.go:172] (0xc002ac9360) (3) Data frame handling I0707 15:34:23.532770 6 
log.go:172] (0xc002ac9360) (3) Data frame sent I0707 15:34:23.533335 6 log.go:172] (0xc00504ef20) Data frame received for 5 I0707 15:34:23.533374 6 log.go:172] (0xc0026dd5e0) (5) Data frame handling I0707 15:34:23.533568 6 log.go:172] (0xc00504ef20) Data frame received for 3 I0707 15:34:23.533595 6 log.go:172] (0xc002ac9360) (3) Data frame handling I0707 15:34:23.534970 6 log.go:172] (0xc00504ef20) Data frame received for 1 I0707 15:34:23.534998 6 log.go:172] (0xc002ac92c0) (1) Data frame handling I0707 15:34:23.535028 6 log.go:172] (0xc002ac92c0) (1) Data frame sent I0707 15:34:23.535055 6 log.go:172] (0xc00504ef20) (0xc002ac92c0) Stream removed, broadcasting: 1 I0707 15:34:23.535181 6 log.go:172] (0xc00504ef20) (0xc002ac92c0) Stream removed, broadcasting: 1 I0707 15:34:23.535216 6 log.go:172] (0xc00504ef20) (0xc002ac9360) Stream removed, broadcasting: 3 I0707 15:34:23.535309 6 log.go:172] (0xc00504ef20) (0xc0026dd5e0) Stream removed, broadcasting: 5 Jul 7 15:34:23.535: INFO: Found all expected endpoints: [netserver-1] I0707 15:34:23.535394 6 log.go:172] (0xc00504ef20) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:34:23.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3548" for this suite. • [SLOW TEST:40.303 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:34:23.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-247b0eed-d222-4c50-964e-779f3737d38a STEP: Creating a pod to test consume configMaps Jul 7 15:34:23.665: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72a2dd5e-a05e-492f-9615-1b537bfd5192" in namespace "projected-2680" to be "success or failure" Jul 7 15:34:23.681: INFO: Pod "pod-projected-configmaps-72a2dd5e-a05e-492f-9615-1b537bfd5192": Phase="Pending", Reason="", readiness=false. Elapsed: 15.66263ms Jul 7 15:34:25.728: INFO: Pod "pod-projected-configmaps-72a2dd5e-a05e-492f-9615-1b537bfd5192": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.062878104s Jul 7 15:34:27.967: INFO: Pod "pod-projected-configmaps-72a2dd5e-a05e-492f-9615-1b537bfd5192": Phase="Running", Reason="", readiness=true. Elapsed: 4.301418099s Jul 7 15:34:29.972: INFO: Pod "pod-projected-configmaps-72a2dd5e-a05e-492f-9615-1b537bfd5192": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.306481252s STEP: Saw pod success Jul 7 15:34:29.972: INFO: Pod "pod-projected-configmaps-72a2dd5e-a05e-492f-9615-1b537bfd5192" satisfied condition "success or failure" Jul 7 15:34:30.002: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-72a2dd5e-a05e-492f-9615-1b537bfd5192 container projected-configmap-volume-test: STEP: delete the pod Jul 7 15:34:31.112: INFO: Waiting for pod pod-projected-configmaps-72a2dd5e-a05e-492f-9615-1b537bfd5192 to disappear Jul 7 15:34:31.568: INFO: Pod pod-projected-configmaps-72a2dd5e-a05e-492f-9615-1b537bfd5192 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:34:31.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2680" for this suite. • [SLOW TEST:8.383 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":400,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:34:31.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 7 15:34:34.874: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 7 15:34:36.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732874, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732874, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63729732875, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732874, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:34:38.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732874, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732874, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732875, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729732874, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 7 15:34:42.071: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:34:42.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6269" for this suite. STEP: Destroying namespace "webhook-6269-markers" for this suite. 
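The patch and update steps exercised above can be driven straight from the admissionregistration/v1 API. A minimal sketch, assuming client-go on the v0.17.x line used by this suite (typed clients without context arguments) and a hypothetical configuration name; the JSON patch narrows the first rule of the first webhook so CREATE is no longer intercepted, mirroring the "rules to not include the create operation" step, and patching ["CREATE"] back restores rejection:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// "sample-webhook-configuration" is a hypothetical name; the suite
	// generates its own. The patch replaces the operations of the first
	// rule of the first webhook.
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]`)
	_, err = cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Patch("sample-webhook-configuration", types.JSONPatchType, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println("webhook rules patched")
}
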
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.560 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":23,"skipped":400,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:34:42.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8276.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8276.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 7 15:34:55.273: INFO: DNS probes using dns-8276/dns-test-e5b99d79-276e-40b1-a081-40f2c4ec50ea succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:34:55.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8276" for this suite. • [SLOW TEST:13.054 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":24,"skipped":412,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:34:55.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5378.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5378.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5378.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5378.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5378.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5378.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 7 15:35:11.630: INFO: DNS probes using dns-5378/dns-test-9c366444-17fb-4ad9-ba97-0f50fc1af605 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:35:12.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5378" for this suite. • [SLOW TEST:16.862 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":25,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:35:12.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jul 7 15:35:23.308: INFO: Successfully updated pod "annotationupdate98bc25ba-c024-4c23-8186-4fbf9335ca99" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:35:25.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8980" for this suite. 
• [SLOW TEST:13.191 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":462,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:35:25.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:35:30.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-618" for this suite. 
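What the watch test above verifies is that watches opened at the same resourceVersion deliver events in one global order. A minimal sketch of opening such a watch, assuming client-go v0.17.x; in the real test the resourceVersion of each produced event is used, whereas the value here is illustrative:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Start the watch from a specific resourceVersion ("12345" is
	// illustrative). Every watcher started from the same version must
	// receive the same events in the same order.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{ResourceVersion: "12345"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // ADDED / MODIFIED / DELETED, in apiserver order
	}
}
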
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":27,"skipped":468,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:35:30.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 7 15:35:35.782: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:35:35.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5283" for this suite. • [SLOW TEST:5.533 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":479,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:35:35.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on 
modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jul 7 15:35:42.714: INFO: Successfully updated pod "labelsupdate94c7c6be-0e55-4345-adbb-a6351513c762" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:35:44.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2947" for this suite. • [SLOW TEST:8.902 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":489,"failed":0} [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:35:44.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1a15b947-6811-4d4f-af70-a9fdebabd119 STEP: Creating a pod to test consume secrets Jul 7 15:35:45.120: INFO: Waiting up to 5m0s for pod "pod-secrets-0045bb6f-3b3e-4e71-abae-bd3c6ac1bc45" in namespace "secrets-3678" to be "success or failure" Jul 7 15:35:45.196: INFO: Pod "pod-secrets-0045bb6f-3b3e-4e71-abae-bd3c6ac1bc45": Phase="Pending", Reason="", readiness=false. Elapsed: 75.761044ms Jul 7 15:35:47.200: INFO: Pod "pod-secrets-0045bb6f-3b3e-4e71-abae-bd3c6ac1bc45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079960048s Jul 7 15:35:49.204: INFO: Pod "pod-secrets-0045bb6f-3b3e-4e71-abae-bd3c6ac1bc45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083429493s Jul 7 15:35:51.613: INFO: Pod "pod-secrets-0045bb6f-3b3e-4e71-abae-bd3c6ac1bc45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.492438219s STEP: Saw pod success Jul 7 15:35:51.613: INFO: Pod "pod-secrets-0045bb6f-3b3e-4e71-abae-bd3c6ac1bc45" satisfied condition "success or failure" Jul 7 15:35:51.615: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-0045bb6f-3b3e-4e71-abae-bd3c6ac1bc45 container secret-env-test: STEP: delete the pod Jul 7 15:35:51.845: INFO: Waiting for pod pod-secrets-0045bb6f-3b3e-4e71-abae-bd3c6ac1bc45 to disappear Jul 7 15:35:52.280: INFO: Pod pod-secrets-0045bb6f-3b3e-4e71-abae-bd3c6ac1bc45 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:35:52.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3678" for this suite. 
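For reference, consuming a secret through env vars as exercised above only takes a secretKeyRef on the container. A minimal sketch, assuming k8s.io/api v0.17.x; the secret name, key, and busybox image are illustrative stand-ins for the suite's own:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						// Injects the value of key "data-1" from secret
						// "secret-test" into the container's environment.
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(&pod); err != nil {
		panic(err)
	}
}
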
• [SLOW TEST:7.467 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":489,"failed":0} SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:35:52.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 7 15:36:16.131: INFO: Container started at 2020-07-07 15:35:59 +0000 UTC, pod became ready at 2020-07-07 15:36:16 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:36:16.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-260" for this suite. 
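The gap the test measures above between container start and pod readiness comes from the probe's initialDelaySeconds: no readiness probe fires before it, and a readiness (unlike a liveness) probe never restarts the container. A minimal sketch of such a container, assuming k8s.io/api v0.17.x, where the embedded handler field is still named Handler (later releases rename it ProbeHandler); the probe command and timings are illustrative:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-webserver",
		Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
		ReadinessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				// Illustrative check: ready once /tmp/ready exists.
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
			},
			InitialDelaySeconds: 15, // pod stays unready at least this long
			PeriodSeconds:       5,
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(&c); err != nil {
		panic(err)
	}
}
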
• [SLOW TEST:23.852 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":492,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:36:16.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1912 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 7 15:36:16.187: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 7 15:36:46.465: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.211:8080/dial?request=hostname&protocol=udp&host=10.244.1.210&port=8081&tries=1'] Namespace:pod-network-test-1912 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 7 15:36:46.465: INFO: >>> kubeConfig: /root/.kube/config I0707 15:36:46.489103 6 log.go:172] (0xc001446d10) (0xc001dc0a00) Create stream I0707 15:36:46.489330 6 log.go:172] (0xc001446d10) (0xc001dc0a00) Stream added, broadcasting: 1 I0707 15:36:46.490801 6 log.go:172] (0xc001446d10) Reply frame received for 1 I0707 15:36:46.490826 6 log.go:172] (0xc001446d10) (0xc000a82820) Create stream I0707 15:36:46.490834 6 log.go:172] (0xc001446d10) (0xc000a82820) Stream added, broadcasting: 3 I0707 15:36:46.491521 6 log.go:172] (0xc001446d10) Reply frame received for 3 I0707 15:36:46.491547 6 log.go:172] (0xc001446d10) (0xc000d48500) Create stream I0707 15:36:46.491556 6 log.go:172] (0xc001446d10) (0xc000d48500) Stream added, broadcasting: 5 I0707 15:36:46.492254 6 log.go:172] (0xc001446d10) Reply frame received for 5 I0707 15:36:46.921481 6 log.go:172] (0xc001446d10) Data frame received for 3 I0707 15:36:46.921528 6 log.go:172] (0xc000a82820) (3) Data frame handling I0707 15:36:46.921564 6 log.go:172] (0xc000a82820) (3) Data frame sent I0707 15:36:46.922871 6 log.go:172] (0xc001446d10) Data frame received for 5 I0707 15:36:46.922936 6 log.go:172] (0xc000d48500) (5) Data frame handling I0707 15:36:46.922986 6 log.go:172] (0xc001446d10) Data frame received for 3 I0707 15:36:46.923025 6 log.go:172] (0xc000a82820) (3) Data frame handling I0707 15:36:46.925010 6 log.go:172] (0xc001446d10) Data frame received for 1 I0707 15:36:46.925024 6 log.go:172] (0xc001dc0a00) (1) Data frame handling 
I0707 15:36:46.925030 6 log.go:172] (0xc001dc0a00) (1) Data frame sent I0707 15:36:46.925328 6 log.go:172] (0xc001446d10) (0xc001dc0a00) Stream removed, broadcasting: 1 I0707 15:36:46.925362 6 log.go:172] (0xc001446d10) Go away received I0707 15:36:46.925402 6 log.go:172] (0xc001446d10) (0xc001dc0a00) Stream removed, broadcasting: 1 I0707 15:36:46.925418 6 log.go:172] (0xc001446d10) (0xc000a82820) Stream removed, broadcasting: 3 I0707 15:36:46.925425 6 log.go:172] (0xc001446d10) (0xc000d48500) Stream removed, broadcasting: 5 Jul 7 15:36:46.925: INFO: Waiting for responses: map[] Jul 7 15:36:46.928: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.211:8080/dial?request=hostname&protocol=udp&host=10.244.2.176&port=8081&tries=1'] Namespace:pod-network-test-1912 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 7 15:36:46.929: INFO: >>> kubeConfig: /root/.kube/config I0707 15:36:46.960453 6 log.go:172] (0xc002398840) (0xc001dc0e60) Create stream I0707 15:36:46.960487 6 log.go:172] (0xc002398840) (0xc001dc0e60) Stream added, broadcasting: 1 I0707 15:36:46.962587 6 log.go:172] (0xc002398840) Reply frame received for 1 I0707 15:36:46.962620 6 log.go:172] (0xc002398840) (0xc001dc0f00) Create stream I0707 15:36:46.962634 6 log.go:172] (0xc002398840) (0xc001dc0f00) Stream added, broadcasting: 3 I0707 15:36:46.963538 6 log.go:172] (0xc002398840) Reply frame received for 3 I0707 15:36:46.963607 6 log.go:172] (0xc002398840) (0xc0022c0000) Create stream I0707 15:36:46.963626 6 log.go:172] (0xc002398840) (0xc0022c0000) Stream added, broadcasting: 5 I0707 15:36:46.964593 6 log.go:172] (0xc002398840) Reply frame received for 5 I0707 15:36:47.024187 6 log.go:172] (0xc002398840) Data frame received for 3 I0707 15:36:47.024218 6 log.go:172] (0xc001dc0f00) (3) Data frame handling I0707 15:36:47.024238 6 log.go:172] (0xc001dc0f00) (3) Data frame sent I0707 15:36:47.024551 6 log.go:172] (0xc002398840) Data frame received for 3 I0707 15:36:47.024581 6 log.go:172] (0xc001dc0f00) (3) Data frame handling I0707 15:36:47.024660 6 log.go:172] (0xc002398840) Data frame received for 5 I0707 15:36:47.024678 6 log.go:172] (0xc0022c0000) (5) Data frame handling I0707 15:36:47.026377 6 log.go:172] (0xc002398840) Data frame received for 1 I0707 15:36:47.026390 6 log.go:172] (0xc001dc0e60) (1) Data frame handling I0707 15:36:47.026397 6 log.go:172] (0xc001dc0e60) (1) Data frame sent I0707 15:36:47.026408 6 log.go:172] (0xc002398840) (0xc001dc0e60) Stream removed, broadcasting: 1 I0707 15:36:47.026422 6 log.go:172] (0xc002398840) Go away received I0707 15:36:47.026603 6 log.go:172] (0xc002398840) (0xc001dc0e60) Stream removed, broadcasting: 1 I0707 15:36:47.026615 6 log.go:172] (0xc002398840) (0xc001dc0f00) Stream removed, broadcasting: 3 I0707 15:36:47.026622 6 log.go:172] (0xc002398840) (0xc0022c0000) Stream removed, broadcasting: 5 Jul 7 15:36:47.026: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:36:47.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1912" for this suite. 
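The curl the framework execs above targets agnhost's /dial endpoint, which proxies a UDP "hostname" probe from the host-network test pod to the target pod and reports who answered. A minimal standalone sketch of the same request, using only the standard library and assuming the response body carries a top-level "responses" list (as the e2e utilities parse it); the IPs are the ones from this run:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// dialResponse mirrors the JSON shape the dial endpoint is assumed to
// return: the hostnames that answered the proxied probe.
type dialResponse struct {
	Responses []string `json:"responses"`
}

func main() {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "udp")
	q.Set("host", "10.244.1.210") // target pod IP from this run
	q.Set("port", "8081")
	q.Set("tries", "1")

	// 10.244.1.211:8080 is the host-test-container-pod from this run.
	resp, err := http.Get("http://10.244.1.211:8080/dial?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var r dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		panic(err)
	}
	fmt.Println(r.Responses) // expect the target pod's hostname, once
}
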
• [SLOW TEST:30.895 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":508,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:36:47.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 7 15:36:47.358: INFO: Waiting up to 5m0s for pod "pod-c203d26d-80d3-4621-b5d4-7c7ede083ab7" in namespace "emptydir-4678" to be "success or failure" Jul 7 15:36:47.411: INFO: Pod "pod-c203d26d-80d3-4621-b5d4-7c7ede083ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 52.415884ms Jul 7 15:36:49.422: INFO: Pod "pod-c203d26d-80d3-4621-b5d4-7c7ede083ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06374835s Jul 7 15:36:51.471: INFO: Pod "pod-c203d26d-80d3-4621-b5d4-7c7ede083ab7": Phase="Running", Reason="", readiness=true. Elapsed: 4.112429422s Jul 7 15:36:53.629: INFO: Pod "pod-c203d26d-80d3-4621-b5d4-7c7ede083ab7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.270537598s STEP: Saw pod success Jul 7 15:36:53.629: INFO: Pod "pod-c203d26d-80d3-4621-b5d4-7c7ede083ab7" satisfied condition "success or failure" Jul 7 15:36:53.671: INFO: Trying to get logs from node jerma-worker pod pod-c203d26d-80d3-4621-b5d4-7c7ede083ab7 container test-container: STEP: delete the pod Jul 7 15:36:53.994: INFO: Waiting for pod pod-c203d26d-80d3-4621-b5d4-7c7ede083ab7 to disappear Jul 7 15:36:54.299: INFO: Pod pod-c203d26d-80d3-4621-b5d4-7c7ede083ab7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:36:54.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4678" for this suite. 
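The (root,0644,tmpfs) case above boils down to an emptyDir with medium Memory plus a container that creates a file with that mode and reads it back. A minimal sketch, assuming k8s.io/api v0.17.x and substituting a plain busybox shell for the suite's mounttest image:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a 0644 file on the tmpfs mount and print its mode
				// back, which is what the suite's mounttest container verifies.
				Command:      []string{"sh", "-c", "echo hi > /test-volume/test-file && chmod 0644 /test-volume/test-file && stat -c %a /test-volume/test-file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(&pod); err != nil {
		panic(err)
	}
}
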
• [SLOW TEST:7.493 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":520,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:36:54.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7348 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-7348 Jul 7 15:36:57.168: INFO: Found 0 stateful pods, waiting for 1 Jul 7 15:37:07.171: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jul 7 15:37:07.204: INFO: Deleting all statefulset in ns statefulset-7348 Jul 7 15:37:07.273: INFO: Scaling statefulset ss to 0 Jul 7 15:37:27.411: INFO: Waiting for statefulset status.replicas updated to 0 Jul 7 15:37:27.414: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:37:27.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7348" for this suite. 
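The "getting/updating a scale subresource" steps above map onto GetScale/UpdateScale on the typed apps/v1 client. A minimal sketch, assuming client-go v0.17.x signatures and the statefulset name and namespace from this run:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Read the scale subresource of statefulset "ss", bump it, write it
	// back. The test then verifies spec.replicas changed on the
	// statefulset itself.
	sts := cs.AppsV1().StatefulSets("statefulset-7348")
	scale, err := sts.GetScale("ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2
	updated, err := sts.UpdateScale("ss", scale)
	if err != nil {
		panic(err)
	}
	fmt.Println("replicas now:", updated.Spec.Replicas)
}
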
• [SLOW TEST:32.946 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":34,"skipped":570,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:37:27.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-93adf763-d6c5-4dfb-a083-f516a9e6a7a3 STEP: Creating a pod to test consume configMaps Jul 7 15:37:27.550: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fb2dea02-c116-4c7a-8ecb-10b35b492ed0" in namespace "projected-5405" to be "success or failure" Jul 7 15:37:27.611: INFO: Pod "pod-projected-configmaps-fb2dea02-c116-4c7a-8ecb-10b35b492ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 60.697412ms Jul 7 15:37:29.851: INFO: Pod "pod-projected-configmaps-fb2dea02-c116-4c7a-8ecb-10b35b492ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300908117s Jul 7 15:37:31.856: INFO: Pod "pod-projected-configmaps-fb2dea02-c116-4c7a-8ecb-10b35b492ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305438899s Jul 7 15:37:33.965: INFO: Pod "pod-projected-configmaps-fb2dea02-c116-4c7a-8ecb-10b35b492ed0": Phase="Running", Reason="", readiness=true. Elapsed: 6.414911858s Jul 7 15:37:35.969: INFO: Pod "pod-projected-configmaps-fb2dea02-c116-4c7a-8ecb-10b35b492ed0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.419063527s STEP: Saw pod success Jul 7 15:37:35.970: INFO: Pod "pod-projected-configmaps-fb2dea02-c116-4c7a-8ecb-10b35b492ed0" satisfied condition "success or failure" Jul 7 15:37:35.973: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-fb2dea02-c116-4c7a-8ecb-10b35b492ed0 container projected-configmap-volume-test: STEP: delete the pod Jul 7 15:37:36.325: INFO: Waiting for pod pod-projected-configmaps-fb2dea02-c116-4c7a-8ecb-10b35b492ed0 to disappear Jul 7 15:37:36.390: INFO: Pod pod-projected-configmaps-fb2dea02-c116-4c7a-8ecb-10b35b492ed0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:37:36.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5405" for this suite. 
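The mappings-as-non-root case above combines three pieces: a projected configMap source, an items remapping of key to path, and a pod-level runAsUser. A minimal sketch, assuming k8s.io/api v0.17.x; the names, key, and UID are illustrative:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Run the whole pod as a non-root UID, as the non-root
			// variant above requires.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/projected/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// Remap key "data-1" to a nested path inside the volume.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(&pod); err != nil {
		panic(err)
	}
}
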
• [SLOW TEST:8.923 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":590,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:37:36.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 7 15:37:36.458: INFO: Waiting up to 5m0s for pod "pod-f276fc73-fa4d-44fe-8182-54b4d4355ff7" in namespace "emptydir-4904" to be "success or failure" Jul 7 15:37:36.462: INFO: Pod "pod-f276fc73-fa4d-44fe-8182-54b4d4355ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.111422ms Jul 7 15:37:38.737: INFO: Pod "pod-f276fc73-fa4d-44fe-8182-54b4d4355ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278669126s Jul 7 15:37:40.741: INFO: Pod "pod-f276fc73-fa4d-44fe-8182-54b4d4355ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282535153s Jul 7 15:37:42.745: INFO: Pod "pod-f276fc73-fa4d-44fe-8182-54b4d4355ff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.28668618s STEP: Saw pod success Jul 7 15:37:42.745: INFO: Pod "pod-f276fc73-fa4d-44fe-8182-54b4d4355ff7" satisfied condition "success or failure" Jul 7 15:37:42.748: INFO: Trying to get logs from node jerma-worker2 pod pod-f276fc73-fa4d-44fe-8182-54b4d4355ff7 container test-container: STEP: delete the pod Jul 7 15:37:42.781: INFO: Waiting for pod pod-f276fc73-fa4d-44fe-8182-54b4d4355ff7 to disappear Jul 7 15:37:42.785: INFO: Pod pod-f276fc73-fa4d-44fe-8182-54b4d4355ff7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:37:42.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4904" for this suite. 
• [SLOW TEST:6.394 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":598,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:37:42.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 7 15:37:42.886: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jul 7 15:37:48.085: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 7 15:37:48.085: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jul 7 15:37:50.690: INFO: Creating deployment "test-rollover-deployment" Jul 7 15:37:51.019: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jul 7 15:37:53.027: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jul 7 15:37:53.031: INFO: Ensure that both replica sets have 1 created replica Jul 7 15:37:53.143: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jul 7 15:37:53.200: INFO: Updating deployment test-rollover-deployment Jul 7 15:37:53.200: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jul 7 15:37:55.895: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jul 7 15:37:57.519: INFO: Make sure deployment "test-rollover-deployment" is complete Jul 7 15:37:58.206: INFO: all replica sets need to contain the pod-template-hash label Jul 7 15:37:58.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733075, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jul 7 15:38:00.344: INFO: all replica sets need to contain the pod-template-hash label Jul 7 15:38:00.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733075, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:38:02.273: INFO: all replica sets need to contain the pod-template-hash label Jul 7 15:38:02.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733082, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:38:04.459: INFO: all replica sets need to contain the pod-template-hash label Jul 7 15:38:04.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733082, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:38:06.214: INFO: all replica sets need to contain the pod-template-hash label Jul 7 15:38:06.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733082, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:38:08.262: INFO: all replica sets need to contain the pod-template-hash label Jul 7 15:38:08.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733082, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:38:10.294: INFO: all replica sets need to contain the pod-template-hash label Jul 7 15:38:10.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733082, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:38:12.403: INFO: Jul 7 15:38:12.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733092, loc:(*time.Location)(0x78f7140)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733071, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:38:14.214: INFO: Jul 7 15:38:14.214: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jul 7 15:38:14.221: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-7220 /apis/apps/v1/namespaces/deployment-7220/deployments/test-rollover-deployment 050fd731-efd7-49a0-bd93-3d9858cc6c15 929407 2 2020-07-07 15:37:50 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001a7c238 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-07 15:37:51 +0000 UTC,LastTransitionTime:2020-07-07 15:37:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-07-07 15:38:12 +0000 UTC,LastTransitionTime:2020-07-07 15:37:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 7 15:38:14.224: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-7220 /apis/apps/v1/namespaces/deployment-7220/replicasets/test-rollover-deployment-574d6dfbff f5ab5cf7-6ba1-4881-951d-6d60f5b5466d 929393 2 2020-07-07 15:37:53 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 050fd731-efd7-49a0-bd93-3d9858cc6c15 0xc001a7c6b7 0xc001a7c6b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] 
[] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001a7c728 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 7 15:38:14.224: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 7 15:38:14.224: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7220 /apis/apps/v1/namespaces/deployment-7220/replicasets/test-rollover-controller 25571187-e7ec-44ad-9f97-d42b286dba8f 929406 2 2020-07-07 15:37:42 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 050fd731-efd7-49a0-bd93-3d9858cc6c15 0xc001a7c5d7 0xc001a7c5d8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001a7c638 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 7 15:38:14.224: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-7220 /apis/apps/v1/namespaces/deployment-7220/replicasets/test-rollover-deployment-f6c94f66c c854423f-004e-48cf-bbe4-f2aa5a00ef6b 929337 2 2020-07-07 15:37:51 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 050fd731-efd7-49a0-bd93-3d9858cc6c15 0xc001a7c790 0xc001a7c791}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001a7c808 ClusterFirst map[] false false 
false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 7 15:38:14.227: INFO: Pod "test-rollover-deployment-574d6dfbff-5zq6n" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-5zq6n test-rollover-deployment-574d6dfbff- deployment-7220 /api/v1/namespaces/deployment-7220/pods/test-rollover-deployment-574d6dfbff-5zq6n 4c33f206-d80e-4824-a3db-373861da4d7e 929362 0 2020-07-07 15:37:53 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff f5ab5cf7-6ba1-4881-951d-6d60f5b5466d 0xc001a7cd67 0xc001a7cd68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qkbxc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qkbxc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qkbxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]To
pologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 15:37:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 15:38:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 15:38:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 15:37:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.179,StartTime:2020-07-07 15:37:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 15:38:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://d7b39a8eeac9913f9781d636cd3535f69bab9cba928c4dc95290b422c578edfe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.179,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:38:14.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7220" for this suite. • [SLOW TEST:31.441 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":37,"skipped":616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:38:14.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 7 15:38:14.806: INFO: Waiting up to 5m0s for pod "pod-db94cba5-a5f9-4145-bb4a-03f93ee87bcf" in namespace "emptydir-6351" to be "success or failure" Jul 7 15:38:14.864: INFO: Pod "pod-db94cba5-a5f9-4145-bb4a-03f93ee87bcf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.034953ms Jul 7 15:38:16.893: INFO: Pod "pod-db94cba5-a5f9-4145-bb4a-03f93ee87bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087319052s Jul 7 15:38:19.325: INFO: Pod "pod-db94cba5-a5f9-4145-bb4a-03f93ee87bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.518524122s Jul 7 15:38:21.348: INFO: Pod "pod-db94cba5-a5f9-4145-bb4a-03f93ee87bcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.542277088s STEP: Saw pod success Jul 7 15:38:21.348: INFO: Pod "pod-db94cba5-a5f9-4145-bb4a-03f93ee87bcf" satisfied condition "success or failure" Jul 7 15:38:21.396: INFO: Trying to get logs from node jerma-worker pod pod-db94cba5-a5f9-4145-bb4a-03f93ee87bcf container test-container: STEP: delete the pod Jul 7 15:38:21.443: INFO: Waiting for pod pod-db94cba5-a5f9-4145-bb4a-03f93ee87bcf to disappear Jul 7 15:38:21.528: INFO: Pod pod-db94cba5-a5f9-4145-bb4a-03f93ee87bcf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:38:21.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6351" for this suite. • [SLOW TEST:7.342 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":656,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:38:21.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-4896 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4896 to expose endpoints map[] Jul 7 15:38:22.379: INFO: successfully validated that service multi-endpoint-test in namespace services-4896 exposes endpoints map[] (270.292387ms elapsed) STEP: Creating pod pod1 in namespace services-4896 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4896 to expose endpoints map[pod1:[100]] Jul 7 15:38:27.138: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.740017693s elapsed, will retry) Jul 7 15:38:28.146: INFO: successfully validated that service multi-endpoint-test in namespace services-4896 exposes endpoints map[pod1:[100]] (5.747861393s elapsed) STEP: Creating pod pod2 in namespace services-4896 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace 
services-4896 to expose endpoints map[pod1:[100] pod2:[101]] Jul 7 15:38:32.840: INFO: Unexpected endpoints: found map[cba835d9-167f-423e-bac7-4620caea6779:[100]], expected map[pod1:[100] pod2:[101]] (4.689948614s elapsed, will retry) Jul 7 15:38:33.856: INFO: successfully validated that service multi-endpoint-test in namespace services-4896 exposes endpoints map[pod1:[100] pod2:[101]] (5.706569619s elapsed) STEP: Deleting pod pod1 in namespace services-4896 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4896 to expose endpoints map[pod2:[101]] Jul 7 15:38:34.912: INFO: successfully validated that service multi-endpoint-test in namespace services-4896 exposes endpoints map[pod2:[101]] (1.05034973s elapsed) STEP: Deleting pod pod2 in namespace services-4896 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4896 to expose endpoints map[] Jul 7 15:38:36.287: INFO: successfully validated that service multi-endpoint-test in namespace services-4896 exposes endpoints map[] (1.370684562s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:38:36.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4896" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.750 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":39,"skipped":681,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:38:36.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-12f4fccc-3772-44d2-9590-c54f642a8651 in namespace container-probe-7721 Jul 7 15:38:42.449: INFO: Started pod liveness-12f4fccc-3772-44d2-9590-c54f642a8651 in namespace container-probe-7721 STEP: checking the pod's current state and verifying that restartCount is present Jul 7 15:38:42.453: INFO: Initial restart count of pod liveness-12f4fccc-3772-44d2-9590-c54f642a8651 is 0 Jul 7 15:39:05.529: INFO: Restart count of pod container-probe-7721/liveness-12f4fccc-3772-44d2-9590-c54f642a8651 is now 1 (23.076670075s elapsed) Jul 7 15:39:19.939: INFO: Restart count of pod 
container-probe-7721/liveness-12f4fccc-3772-44d2-9590-c54f642a8651 is now 2 (37.486555783s elapsed) Jul 7 15:39:44.890: INFO: Restart count of pod container-probe-7721/liveness-12f4fccc-3772-44d2-9590-c54f642a8651 is now 3 (1m2.437065474s elapsed) Jul 7 15:40:03.158: INFO: Restart count of pod container-probe-7721/liveness-12f4fccc-3772-44d2-9590-c54f642a8651 is now 4 (1m20.705523415s elapsed) Jul 7 15:41:15.067: INFO: Restart count of pod container-probe-7721/liveness-12f4fccc-3772-44d2-9590-c54f642a8651 is now 5 (2m32.61390961s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:41:15.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7721" for this suite. • [SLOW TEST:159.544 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":704,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:41:15.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:41:24.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8177" for this suite. • [SLOW TEST:8.511 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":41,"skipped":719,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:41:24.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 7 15:41:25.970: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 7 15:41:27.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733286, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733286, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733287, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733285, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:41:30.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733286, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733286, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733287, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733285, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:41:32.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733286, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63729733286, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733287, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733285, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:41:34.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733286, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733286, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733287, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733285, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 7 15:41:37.718: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 7 15:41:37.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5930-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:41:40.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3087" for this suite. STEP: Destroying namespace "webhook-3087-markers" for this suite. 
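(Reference sketch: the registration step logged above amounts to creating a MutatingWebhookConfiguration that points at the e2e-test-webhook service. This is an illustrative reconstruction, not the suite's actual manifest; the object name, webhook path, CRD version, and failure policy are assumptions, while the service name, namespace, and CRD group/resource are taken from the log.)

    cat <<'EOF' | kubectl apply -f -
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: e2e-test-mutating-webhook            # hypothetical name
    webhooks:
    - name: mutate-crd.webhook.example.com       # hypothetical webhook name
      clientConfig:
        service:
          name: e2e-test-webhook                 # service name from the log above
          namespace: webhook-3087                # per-run namespace from the log above
          path: /mutating-custom-resource        # assumed path
        # caBundle omitted here; the suite injects its self-signed CA
      rules:
      - apiGroups: ["webhook.example.com"]
        apiVersions: ["v1"]                      # assumed CRD version
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-5930-crds"]
      failurePolicy: Fail
      sideEffects: None
      admissionReviewVersions: ["v1", "v1beta1"]
    EOF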
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.963 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":42,"skipped":734,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:41:41.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3224.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3224.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3224.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3224.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3224.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3224.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3224.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3224.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.15.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.15.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.15.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.15.175_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3224.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3224.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3224.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3224.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3224.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3224.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3224.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3224.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3224.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.15.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.15.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.15.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.15.175_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 7 15:41:55.431: INFO: Unable to read wheezy_udp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:41:55.434: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:41:55.437: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:41:55.440: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:41:55.464: INFO: Unable to read jessie_udp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:41:55.467: INFO: Unable to read jessie_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:41:55.469: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:41:55.471: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:41:55.486: INFO: Lookups using dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6 failed for: [wheezy_udp@dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_udp@dns-test-service.dns-3224.svc.cluster.local jessie_tcp@dns-test-service.dns-3224.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local] Jul 7 15:42:00.491: INFO: Unable to read wheezy_udp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:00.494: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) 
Jul 7 15:42:00.497: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:00.500: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:00.539: INFO: Unable to read jessie_udp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:00.542: INFO: Unable to read jessie_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:00.544: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:00.547: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:00.568: INFO: Lookups using dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6 failed for: [wheezy_udp@dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_udp@dns-test-service.dns-3224.svc.cluster.local jessie_tcp@dns-test-service.dns-3224.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local] Jul 7 15:42:05.616: INFO: Unable to read wheezy_udp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:05.619: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:05.621: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:05.623: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:05.700: INFO: Unable to read jessie_udp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods 
dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:05.702: INFO: Unable to read jessie_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:05.704: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:05.706: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:05.720: INFO: Lookups using dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6 failed for: [wheezy_udp@dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_udp@dns-test-service.dns-3224.svc.cluster.local jessie_tcp@dns-test-service.dns-3224.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local] Jul 7 15:42:10.490: INFO: Unable to read wheezy_udp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:10.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:10.497: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:10.500: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:10.874: INFO: Unable to read jessie_udp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:10.877: INFO: Unable to read jessie_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:10.879: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:10.882: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could 
not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:10.898: INFO: Lookups using dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6 failed for: [wheezy_udp@dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_udp@dns-test-service.dns-3224.svc.cluster.local jessie_tcp@dns-test-service.dns-3224.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local] Jul 7 15:42:15.491: INFO: Unable to read wheezy_udp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:15.494: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:15.496: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:15.499: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:15.675: INFO: Unable to read jessie_udp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:15.678: INFO: Unable to read jessie_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:15.681: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:15.683: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:15.903: INFO: Lookups using dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6 failed for: [wheezy_udp@dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_udp@dns-test-service.dns-3224.svc.cluster.local jessie_tcp@dns-test-service.dns-3224.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local] Jul 7 15:42:20.580: INFO: Unable to read wheezy_udp@dns-test-service.dns-3224.svc.cluster.local 
from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:20.583: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:20.587: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:20.590: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:20.610: INFO: Unable to read jessie_udp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:20.613: INFO: Unable to read jessie_tcp@dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:20.615: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:20.618: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local from pod dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6: the server could not find the requested resource (get pods dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6) Jul 7 15:42:20.633: INFO: Lookups using dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6 failed for: [wheezy_udp@dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@dns-test-service.dns-3224.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_udp@dns-test-service.dns-3224.svc.cluster.local jessie_tcp@dns-test-service.dns-3224.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3224.svc.cluster.local] Jul 7 15:42:26.570: INFO: DNS probes using dns-3224/dns-test-7442c4ff-a18c-4ecb-9f7f-518d2ffb88a6 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:42:27.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3224" for this suite. 
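(Reference sketch: the wheezy/jessie probe scripts above unroll to a handful of one-off queries. Runnable from any pod that has dig installed, for example a dnsutils image, against the names and ClusterIP recorded in this run:)

    # A and SRV records for the regular service (per-run namespace dns-3224):
    dig +search +short dns-test-service.dns-3224.svc.cluster.local A
    dig +search +short _http._tcp.dns-test-service.dns-3224.svc.cluster.local SRV
    # SRV record published via the headless service:
    dig +search +short _http._tcp.test-service-2.dns-3224.svc.cluster.local SRV
    # PTR record for the service ClusterIP (10.97.15.175 in this run):
    dig +short -x 10.97.15.175

The repeated "Unable to read ... the server could not find the requested resource" rounds are expected: each dig check writes an OK marker file under /results only once its lookup succeeds, and the framework polls those files through the API server until every record resolves, at which point it logs that the DNS probes succeeded.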
• [SLOW TEST:46.343 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":43,"skipped":745,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:42:27.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 7 15:42:27.989: INFO: Create a RollingUpdate DaemonSet Jul 7 15:42:27.992: INFO: Check that daemon pods launch on every node of the cluster Jul 7 15:42:28.007: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:42:28.069: INFO: Number of nodes with available pods: 0 Jul 7 15:42:28.069: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:42:29.075: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:42:29.078: INFO: Number of nodes with available pods: 0 Jul 7 15:42:29.078: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:42:30.581: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:42:30.649: INFO: Number of nodes with available pods: 0 Jul 7 15:42:30.649: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:42:31.138: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:42:31.142: INFO: Number of nodes with available pods: 0 Jul 7 15:42:31.142: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:42:32.774: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:42:33.164: INFO: Number of nodes with available pods: 0 Jul 7 15:42:33.164: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:42:34.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:42:34.695: INFO: Number of nodes with available pods: 0 Jul 7 15:42:34.695: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:42:35.156: INFO: 
DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:42:35.230: INFO: Number of nodes with available pods: 0 Jul 7 15:42:35.230: INFO: Node jerma-worker is running more than one daemon pod Jul 7 15:42:36.446: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:42:36.522: INFO: Number of nodes with available pods: 2 Jul 7 15:42:36.522: INFO: Number of running nodes: 2, number of available pods: 2 Jul 7 15:42:36.522: INFO: Update the DaemonSet to trigger a rollout Jul 7 15:42:36.813: INFO: Updating DaemonSet daemon-set Jul 7 15:42:44.479: INFO: Roll back the DaemonSet before rollout is complete Jul 7 15:42:44.485: INFO: Updating DaemonSet daemon-set Jul 7 15:42:44.485: INFO: Make sure DaemonSet rollback is complete Jul 7 15:42:44.510: INFO: Wrong image for pod: daemon-set-phfct. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 7 15:42:44.510: INFO: Pod daemon-set-phfct is not available Jul 7 15:42:44.541: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:42:45.633: INFO: Wrong image for pod: daemon-set-phfct. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 7 15:42:45.633: INFO: Pod daemon-set-phfct is not available Jul 7 15:42:45.636: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:42:46.546: INFO: Wrong image for pod: daemon-set-phfct. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jul 7 15:42:46.546: INFO: Pod daemon-set-phfct is not available Jul 7 15:42:46.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 7 15:42:47.544: INFO: Pod daemon-set-47df5 is not available Jul 7 15:42:47.548: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1432, will wait for the garbage collector to delete the pods Jul 7 15:42:47.611: INFO: Deleting DaemonSet.extensions daemon-set took: 7.126219ms Jul 7 15:42:47.911: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.229713ms Jul 7 15:42:53.114: INFO: Number of nodes with available pods: 0 Jul 7 15:42:53.114: INFO: Number of running nodes: 0, number of available pods: 0 Jul 7 15:42:53.116: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1432/daemonsets","resourceVersion":"930517"},"items":null} Jul 7 15:42:53.118: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1432/pods","resourceVersion":"930517"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:42:53.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1432" for this suite. 
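(Reference sketch: the update-then-rollback flow exercised above, done by hand against a RollingUpdate DaemonSet named daemon-set. The container name app is an assumption; the images are the ones from this run. The property being verified is that pods which never received the bad image are left alone, so the rollback causes no unnecessary restarts.)

    kubectl set image daemonset/daemon-set app=foo:non-existent   # push an unpullable image; the rollout stalls
    kubectl rollout status daemonset/daemon-set --timeout=30s     # times out while the updated pod stays unavailable
    kubectl rollout undo daemonset/daemon-set                     # roll back to the previous pod template
    kubectl rollout history daemonset/daemon-set                  # list the revisions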
• [SLOW TEST:25.459 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":44,"skipped":756,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:42:53.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Jul 7 15:42:54.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2383' Jul 7 15:42:56.791: INFO: stderr: "" Jul 7 15:42:56.791: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 7 15:42:56.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2383' Jul 7 15:42:57.614: INFO: stderr: "" Jul 7 15:42:57.614: INFO: stdout: "update-demo-nautilus-46wxn update-demo-nautilus-wn54n " Jul 7 15:42:57.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-46wxn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2383' Jul 7 15:42:58.123: INFO: stderr: "" Jul 7 15:42:58.124: INFO: stdout: "" Jul 7 15:42:58.124: INFO: update-demo-nautilus-46wxn is created but not running Jul 7 15:43:03.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2383' Jul 7 15:43:15.776: INFO: stderr: "" Jul 7 15:43:15.776: INFO: stdout: "update-demo-nautilus-46wxn update-demo-nautilus-wn54n " Jul 7 15:43:15.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-46wxn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2383' Jul 7 15:43:15.865: INFO: stderr: "" Jul 7 15:43:15.865: INFO: stdout: "true" Jul 7 15:43:15.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-46wxn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2383' Jul 7 15:43:15.951: INFO: stderr: "" Jul 7 15:43:15.951: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 7 15:43:15.951: INFO: validating pod update-demo-nautilus-46wxn Jul 7 15:43:15.955: INFO: got data: { "image": "nautilus.jpg" } Jul 7 15:43:15.955: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 7 15:43:15.955: INFO: update-demo-nautilus-46wxn is verified up and running Jul 7 15:43:15.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wn54n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2383' Jul 7 15:43:16.085: INFO: stderr: "" Jul 7 15:43:16.086: INFO: stdout: "true" Jul 7 15:43:16.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wn54n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2383' Jul 7 15:43:16.175: INFO: stderr: "" Jul 7 15:43:16.175: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 7 15:43:16.175: INFO: validating pod update-demo-nautilus-wn54n Jul 7 15:43:16.210: INFO: got data: { "image": "nautilus.jpg" } Jul 7 15:43:16.210: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 7 15:43:16.210: INFO: update-demo-nautilus-wn54n is verified up and running STEP: rolling-update to new replication controller Jul 7 15:43:16.212: INFO: scanned /root for discovery docs: Jul 7 15:43:16.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2383' Jul 7 15:43:43.012: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jul 7 15:43:43.012: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 7 15:43:43.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2383' Jul 7 15:43:43.140: INFO: stderr: "" Jul 7 15:43:43.140: INFO: stdout: "update-demo-kitten-bwsrt update-demo-kitten-qqs5s " Jul 7 15:43:43.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bwsrt -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2383' Jul 7 15:43:43.230: INFO: stderr: "" Jul 7 15:43:43.230: INFO: stdout: "true" Jul 7 15:43:43.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bwsrt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2383' Jul 7 15:43:43.318: INFO: stderr: "" Jul 7 15:43:43.319: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jul 7 15:43:43.319: INFO: validating pod update-demo-kitten-bwsrt Jul 7 15:43:43.321: INFO: got data: { "image": "kitten.jpg" } Jul 7 15:43:43.321: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jul 7 15:43:43.321: INFO: update-demo-kitten-bwsrt is verified up and running Jul 7 15:43:43.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qqs5s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2383' Jul 7 15:43:43.419: INFO: stderr: "" Jul 7 15:43:43.419: INFO: stdout: "true" Jul 7 15:43:43.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qqs5s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2383' Jul 7 15:43:43.510: INFO: stderr: "" Jul 7 15:43:43.510: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jul 7 15:43:43.510: INFO: validating pod update-demo-kitten-qqs5s Jul 7 15:43:43.565: INFO: got data: { "image": "kitten.jpg" } Jul 7 15:43:43.565: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jul 7 15:43:43.565: INFO: update-demo-kitten-qqs5s is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:43:43.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2383" for this suite. 
• [SLOW TEST:50.424 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":45,"skipped":762,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:43:43.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-1e3e2dcd-ef90-416d-843b-0ab179acb1f9 STEP: Creating secret with name s-test-opt-upd-1b358ae7-5997-4eab-8b7d-468b31257aed STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1e3e2dcd-ef90-416d-843b-0ab179acb1f9 STEP: Updating secret s-test-opt-upd-1b358ae7-5997-4eab-8b7d-468b31257aed STEP: Creating secret with name s-test-opt-create-dfa4a698-b231-4c41-902c-efb372baa3cd STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:45:06.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1943" for this suite. 
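For reference, the "optional updates" behavior exercised above comes from marking the projected volume's secret sources as optional. A minimal sketch of such a pod, not taken from the test's own code (pod name, image, and mount path are assumptions for illustration):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo      # hypothetical name
  spec:
    containers:
    - name: reader
      image: busybox                 # image assumed for illustration
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: creds
        mountPath: /etc/projected
    volumes:
    - name: creds
      projected:
        sources:
        - secret:
            name: s-test-opt-del     # optional: pod tolerates this secret being deleted
            optional: true
        - secret:
            name: s-test-opt-create  # optional: may be created only after pod start
            optional: true
  EOF

Because both sources are optional, the kubelet keeps re-syncing the volume: keys from a deleted secret disappear under /etc/projected and a newly created secret shows up, all without restarting the pod, which is exactly the "waiting to observe update in volume" step above.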
• [SLOW TEST:82.772 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":776,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:45:06.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jul 7 15:45:16.253: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:45:17.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7740" for this suite. 
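The adopt/release mechanics above hinge entirely on whether the pod's labels match the ReplicaSet's selector. A hedged by-hand sketch of the same flow (pod name, image, and label values assumed):

  # A bare pod whose labels match an existing ReplicaSet selector gets adopted:
  # the controller sets itself as the pod's ownerReference and counts it as a replica.
  kubectl run pod-adoption-release --image=nginx --restart=Never \
    --labels=name=pod-adoption-release

  # Changing the label so it no longer matches releases the pod: the controller
  # clears the ownerReference and spawns a replacement to restore the replica count.
  kubectl label pod pod-adoption-release name=released --overwrite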
• [SLOW TEST:11.040 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":47,"skipped":780,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:45:17.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 7 15:45:21.315: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 7 15:45:23.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733521, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733520, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:45:26.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733521, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733520, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:45:27.914: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733521, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733521, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733520, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 7 15:45:30.979: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:45:44.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9471" for this suite. STEP: Destroying namespace "webhook-9471-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:27.838 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":48,"skipped":788,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:45:45.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jul 7 15:45:45.519: INFO: >>> kubeConfig: /root/.kube/config Jul 7 15:45:47.992: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:45:59.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1812" for this suite. 
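One way to observe what this CRD test asserts: custom resource schemas are merged into the aggregated OpenAPI document served by the apiserver, one definition per kind even when the CRDs share a group and version. A rough manual check (the "e2e-test" substring and the resource names are assumptions about the generated CRDs, not taken from this run):

  # Dump the aggregated OpenAPI v2 document and list the CRD-backed definitions.
  kubectl get --raw /openapi/v2 > openapi.json
  grep -o '"[^"]*e2e-test[^"]*"' openapi.json | sort -u

  # kubectl explain reads the same document, so both kinds should resolve:
  kubectl explain <plural-of-first-kind>     # placeholder resource names
  kubectl explain <plural-of-second-kind>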
• [SLOW TEST:13.824 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":49,"skipped":796,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:45:59.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-92dc0a93-91de-4312-b345-7e2d6c41f6c9 STEP: Creating a pod to test consume secrets Jul 7 15:45:59.207: INFO: Waiting up to 5m0s for pod "pod-secrets-7320391a-8b6a-4ef2-8089-bd64df55cec3" in namespace "secrets-4960" to be "success or failure" Jul 7 15:45:59.210: INFO: Pod "pod-secrets-7320391a-8b6a-4ef2-8089-bd64df55cec3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.251161ms Jul 7 15:46:01.565: INFO: Pod "pod-secrets-7320391a-8b6a-4ef2-8089-bd64df55cec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35813421s Jul 7 15:46:03.619: INFO: Pod "pod-secrets-7320391a-8b6a-4ef2-8089-bd64df55cec3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.411899434s Jul 7 15:46:05.622: INFO: Pod "pod-secrets-7320391a-8b6a-4ef2-8089-bd64df55cec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.415553749s STEP: Saw pod success Jul 7 15:46:05.623: INFO: Pod "pod-secrets-7320391a-8b6a-4ef2-8089-bd64df55cec3" satisfied condition "success or failure" Jul 7 15:46:05.625: INFO: Trying to get logs from node jerma-worker pod pod-secrets-7320391a-8b6a-4ef2-8089-bd64df55cec3 container secret-volume-test: STEP: delete the pod Jul 7 15:46:05.643: INFO: Waiting for pod pod-secrets-7320391a-8b6a-4ef2-8089-bd64df55cec3 to disappear Jul 7 15:46:05.672: INFO: Pod pod-secrets-7320391a-8b6a-4ef2-8089-bd64df55cec3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:46:05.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4960" for this suite. 
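The "mappings and Item Mode" variant corresponds to the items list of a secret volume source, which remaps secret keys to file paths and can override the file mode per item. A minimal sketch (pod name, image, key, path, and mode values are illustrative, not read back from this run):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mapping-demo          # hypothetical name
  spec:
    containers:
    - name: secret-volume-test
      image: busybox                   # image assumed
      command: ["sh", "-c", "ls -l /etc/secret-volume && sleep 3600"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map    # must already exist in the namespace
        items:
        - key: data-1                  # secret key ...
          path: new-path-data-1        # ... exposed under this relative path
          mode: 0400                   # per-item file mode (octal)
  EOF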
• [SLOW TEST:6.628 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:46:05.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jul 7 15:46:05.829: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:46:21.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5495" for this suite. 
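What "invoke init containers on a RestartAlways pod" boils down to: with restartPolicy: Always, each init container must still run to successful completion, in order, before any regular container starts. A hedged sketch of such a pod (names, images, and commands assumed):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo                   # hypothetical name
  spec:
    restartPolicy: Always
    initContainers:                   # run sequentially, each to completion
    - name: init-1
      image: busybox
      command: ["true"]
    - name: init-2
      image: busybox
      command: ["true"]
    containers:                       # started only after both inits succeed
    - name: main
      image: k8s.gcr.io/pause:3.1
  EOF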
• [SLOW TEST:16.636 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":51,"skipped":831,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:46:22.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 7 15:46:22.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c802c990-e288-4428-91ba-65e608720afe" in namespace "downward-api-1871" to be "success or failure" Jul 7 15:46:22.895: INFO: Pod "downwardapi-volume-c802c990-e288-4428-91ba-65e608720afe": Phase="Pending", Reason="", readiness=false. Elapsed: 70.370553ms Jul 7 15:46:25.531: INFO: Pod "downwardapi-volume-c802c990-e288-4428-91ba-65e608720afe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.706815907s Jul 7 15:46:27.637: INFO: Pod "downwardapi-volume-c802c990-e288-4428-91ba-65e608720afe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.813009507s Jul 7 15:46:29.703: INFO: Pod "downwardapi-volume-c802c990-e288-4428-91ba-65e608720afe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.879141944s STEP: Saw pod success Jul 7 15:46:29.703: INFO: Pod "downwardapi-volume-c802c990-e288-4428-91ba-65e608720afe" satisfied condition "success or failure" Jul 7 15:46:29.706: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c802c990-e288-4428-91ba-65e608720afe container client-container: STEP: delete the pod Jul 7 15:46:29.759: INFO: Waiting for pod downwardapi-volume-c802c990-e288-4428-91ba-65e608720afe to disappear Jul 7 15:46:29.792: INFO: Pod downwardapi-volume-c802c990-e288-4428-91ba-65e608720afe no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:46:29.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1871" for this suite. 
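"Podname only" refers to a downwardAPI volume exposing just metadata.name as a file. Minimal sketch (pod name, image, and mount path are assumptions for illustration):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-demo               # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox                  # image assumed
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname               # file /etc/podinfo/podname
          fieldRef:
            fieldPath: metadata.name  # resolves to "downward-demo"
  EOF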
• [SLOW TEST:7.582 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":833,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:46:29.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3455 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3455 STEP: creating replication controller externalsvc in namespace services-3455 I0707 15:46:30.560257 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3455, replica count: 2 I0707 15:46:33.610716 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0707 15:46:36.610978 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0707 15:46:39.611203 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jul 7 15:46:39.823: INFO: Creating new exec pod Jul 7 15:46:45.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3455 execpodp9tw8 -- /bin/sh -x -c nslookup clusterip-service' Jul 7 15:46:46.217: INFO: stderr: "I0707 15:46:46.099280 619 log.go:172] (0xc000a39550) (0xc000a48780) Create stream\nI0707 15:46:46.099348 619 log.go:172] (0xc000a39550) (0xc000a48780) Stream added, broadcasting: 1\nI0707 15:46:46.110144 619 log.go:172] (0xc000a39550) Reply frame received for 1\nI0707 15:46:46.110201 619 log.go:172] (0xc000a39550) (0xc00060e780) Create stream\nI0707 15:46:46.110212 619 log.go:172] (0xc000a39550) (0xc00060e780) Stream added, broadcasting: 3\nI0707 15:46:46.111616 619 log.go:172] (0xc000a39550) Reply frame received for 3\nI0707 15:46:46.111664 619 log.go:172] (0xc000a39550) (0xc000425540) Create stream\nI0707 15:46:46.111677 619 log.go:172] (0xc000a39550) (0xc000425540) Stream added, broadcasting: 5\nI0707 15:46:46.112735 619 log.go:172] (0xc000a39550) Reply frame received for 5\nI0707 15:46:46.202149 619 
log.go:172] (0xc000a39550) Data frame received for 5\nI0707 15:46:46.202175 619 log.go:172] (0xc000425540) (5) Data frame handling\nI0707 15:46:46.202196 619 log.go:172] (0xc000425540) (5) Data frame sent\n+ nslookup clusterip-service\nI0707 15:46:46.208602 619 log.go:172] (0xc000a39550) Data frame received for 3\nI0707 15:46:46.208622 619 log.go:172] (0xc00060e780) (3) Data frame handling\nI0707 15:46:46.208636 619 log.go:172] (0xc00060e780) (3) Data frame sent\nI0707 15:46:46.209668 619 log.go:172] (0xc000a39550) Data frame received for 3\nI0707 15:46:46.209689 619 log.go:172] (0xc00060e780) (3) Data frame handling\nI0707 15:46:46.209706 619 log.go:172] (0xc00060e780) (3) Data frame sent\nI0707 15:46:46.210163 619 log.go:172] (0xc000a39550) Data frame received for 3\nI0707 15:46:46.210187 619 log.go:172] (0xc00060e780) (3) Data frame handling\nI0707 15:46:46.210211 619 log.go:172] (0xc000a39550) Data frame received for 5\nI0707 15:46:46.210223 619 log.go:172] (0xc000425540) (5) Data frame handling\nI0707 15:46:46.211942 619 log.go:172] (0xc000a39550) Data frame received for 1\nI0707 15:46:46.211958 619 log.go:172] (0xc000a48780) (1) Data frame handling\nI0707 15:46:46.211970 619 log.go:172] (0xc000a48780) (1) Data frame sent\nI0707 15:46:46.211982 619 log.go:172] (0xc000a39550) (0xc000a48780) Stream removed, broadcasting: 1\nI0707 15:46:46.212129 619 log.go:172] (0xc000a39550) Go away received\nI0707 15:46:46.212584 619 log.go:172] (0xc000a39550) (0xc000a48780) Stream removed, broadcasting: 1\nI0707 15:46:46.212604 619 log.go:172] (0xc000a39550) (0xc00060e780) Stream removed, broadcasting: 3\nI0707 15:46:46.212614 619 log.go:172] (0xc000a39550) (0xc000425540) Stream removed, broadcasting: 5\n" Jul 7 15:46:46.218: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3455.svc.cluster.local\tcanonical name = externalsvc.services-3455.svc.cluster.local.\nName:\texternalsvc.services-3455.svc.cluster.local\nAddress: 10.110.26.42\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3455, will wait for the garbage collector to delete the pods Jul 7 15:46:46.299: INFO: Deleting ReplicationController externalsvc took: 27.536057ms Jul 7 15:46:46.599: INFO: Terminating ReplicationController externalsvc pods took: 300.300736ms Jul 7 15:46:57.926: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:46:57.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3455" for this suite. 
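The type flip performed above can be approximated with a merge patch; note that converting from ClusterIP to ExternalName requires clearing spec.clusterIP. This sketch reuses names from the run but is not the test's own code path (the test mutates the object via the API directly, and the exec pod name is a placeholder):

  kubectl patch service clusterip-service -n services-3455 --type=merge -p '
  {"spec":{
     "type": "ExternalName",
     "externalName": "externalsvc.services-3455.svc.cluster.local",
     "clusterIP": ""
  }}'

  # DNS for the service now returns a CNAME instead of an A record,
  # which is what the nslookup output above verified:
  kubectl exec -n services-3455 <exec-pod> -- nslookup clusterip-service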
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:28.054 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":53,"skipped":834,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:46:57.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 7 15:46:58.232: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:47:02.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6814" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":845,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:47:02.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Jul 7 15:47:02.530: INFO: Waiting up to 5m0s for pod "client-containers-e213750b-64e1-4a12-a485-15c5853ed06c" in namespace "containers-2536" to be "success or failure" Jul 7 15:47:02.548: INFO: Pod "client-containers-e213750b-64e1-4a12-a485-15c5853ed06c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.49024ms Jul 7 15:47:04.669: INFO: Pod "client-containers-e213750b-64e1-4a12-a485-15c5853ed06c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.139413522s Jul 7 15:47:07.149: INFO: Pod "client-containers-e213750b-64e1-4a12-a485-15c5853ed06c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.619716427s Jul 7 15:47:09.304: INFO: Pod "client-containers-e213750b-64e1-4a12-a485-15c5853ed06c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.774645458s Jul 7 15:47:11.394: INFO: Pod "client-containers-e213750b-64e1-4a12-a485-15c5853ed06c": Phase="Running", Reason="", readiness=true. Elapsed: 8.864008781s Jul 7 15:47:13.397: INFO: Pod "client-containers-e213750b-64e1-4a12-a485-15c5853ed06c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.867424168s STEP: Saw pod success Jul 7 15:47:13.397: INFO: Pod "client-containers-e213750b-64e1-4a12-a485-15c5853ed06c" satisfied condition "success or failure" Jul 7 15:47:13.399: INFO: Trying to get logs from node jerma-worker2 pod client-containers-e213750b-64e1-4a12-a485-15c5853ed06c container test-container: STEP: delete the pod Jul 7 15:47:13.515: INFO: Waiting for pod client-containers-e213750b-64e1-4a12-a485-15c5853ed06c to disappear Jul 7 15:47:13.537: INFO: Pod client-containers-e213750b-64e1-4a12-a485-15c5853ed06c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:47:13.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2536" for this suite. • [SLOW TEST:11.139 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":849,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:47:13.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jul 7 15:47:16.079: INFO: Pod name wrapped-volume-race-bda1d761-de1b-4e78-9296-1eb7ee13eb57: Found 0 pods out of 5 Jul 7 15:47:21.084: INFO: Pod name wrapped-volume-race-bda1d761-de1b-4e78-9296-1eb7ee13eb57: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-bda1d761-de1b-4e78-9296-1eb7ee13eb57 in namespace emptydir-wrapper-7597, will wait for the garbage collector to delete the pods Jul 7 15:47:37.258: INFO: Deleting ReplicationController wrapped-volume-race-bda1d761-de1b-4e78-9296-1eb7ee13eb57 took: 
103.681058ms Jul 7 15:47:37.658: INFO: Terminating ReplicationController wrapped-volume-race-bda1d761-de1b-4e78-9296-1eb7ee13eb57 pods took: 400.221161ms STEP: Creating RC which spawns configmap-volume pods Jul 7 15:47:47.458: INFO: Pod name wrapped-volume-race-db793268-e0c3-4722-a471-92a32f8cd04f: Found 0 pods out of 5 Jul 7 15:47:52.495: INFO: Pod name wrapped-volume-race-db793268-e0c3-4722-a471-92a32f8cd04f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-db793268-e0c3-4722-a471-92a32f8cd04f in namespace emptydir-wrapper-7597, will wait for the garbage collector to delete the pods Jul 7 15:48:14.875: INFO: Deleting ReplicationController wrapped-volume-race-db793268-e0c3-4722-a471-92a32f8cd04f took: 5.814006ms Jul 7 15:48:15.275: INFO: Terminating ReplicationController wrapped-volume-race-db793268-e0c3-4722-a471-92a32f8cd04f pods took: 400.23649ms STEP: Creating RC which spawns configmap-volume pods Jul 7 15:48:36.344: INFO: Pod name wrapped-volume-race-aae30f2a-35d0-4559-a51f-a1a8b4bb2de9: Found 0 pods out of 5 Jul 7 15:48:41.396: INFO: Pod name wrapped-volume-race-aae30f2a-35d0-4559-a51f-a1a8b4bb2de9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-aae30f2a-35d0-4559-a51f-a1a8b4bb2de9 in namespace emptydir-wrapper-7597, will wait for the garbage collector to delete the pods Jul 7 15:48:59.530: INFO: Deleting ReplicationController wrapped-volume-race-aae30f2a-35d0-4559-a51f-a1a8b4bb2de9 took: 62.946046ms Jul 7 15:49:00.031: INFO: Terminating ReplicationController wrapped-volume-race-aae30f2a-35d0-4559-a51f-a1a8b4bb2de9 pods took: 500.294165ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:49:22.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7597" for this suite. 
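Shape of the racy workload above: every pod in the RC mounts all 50 ConfigMaps as separate volumes, so five such pods churning on a node exercise concurrent volume setup and teardown, which is where the historical "wrapper volume" race lived. Abbreviated sketch with two of the 50 volumes shown (names assumed; the test actually runs this as an RC with replicas: 5):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: wrapped-volume-race-demo
  spec:
    containers:
    - name: test-container
      image: busybox                   # image assumed
      command: ["sleep", "3600"]
      volumeMounts:
      - {name: racey-configmap-0, mountPath: /etc/config-0}
      - {name: racey-configmap-1, mountPath: /etc/config-1}
      # ... one mount per ConfigMap, up to 49
    volumes:
    - name: racey-configmap-0
      configMap: {name: racey-configmap-0}
    - name: racey-configmap-1
      configMap: {name: racey-configmap-1}
    # ... one volume per ConfigMap, up to 49
  EOF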
• [SLOW TEST:129.291 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":56,"skipped":882,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:49:22.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Jul 7 15:49:23.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3508' Jul 7 15:49:24.036: INFO: stderr: "" Jul 7 15:49:24.036: INFO: stdout: "pod/pause created\n" Jul 7 15:49:24.036: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jul 7 15:49:24.036: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3508" to be "running and ready" Jul 7 15:49:24.172: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 136.469422ms Jul 7 15:49:26.606: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.569684251s Jul 7 15:49:28.790: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.754032876s Jul 7 15:49:31.246: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 7.210573871s Jul 7 15:49:31.247: INFO: Pod "pause" satisfied condition "running and ready" Jul 7 15:49:31.247: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Jul 7 15:49:31.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3508' Jul 7 15:49:31.515: INFO: stderr: "" Jul 7 15:49:31.515: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jul 7 15:49:31.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3508' Jul 7 15:49:31.661: INFO: stderr: "" Jul 7 15:49:31.661: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Jul 7 15:49:31.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3508' Jul 7 15:49:31.815: INFO: stderr: "" Jul 7 15:49:31.815: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jul 7 15:49:31.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3508' Jul 7 15:49:31.935: INFO: stderr: "" Jul 7 15:49:31.935: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Jul 7 15:49:31.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3508' Jul 7 15:49:32.623: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 7 15:49:32.623: INFO: stdout: "pod \"pause\" force deleted\n" Jul 7 15:49:32.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3508' Jul 7 15:49:32.965: INFO: stderr: "No resources found in kubectl-3508 namespace.\n" Jul 7 15:49:32.965: INFO: stdout: "" Jul 7 15:49:32.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3508 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 7 15:49:33.186: INFO: stderr: "" Jul 7 15:49:33.186: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:49:33.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3508" for this suite. 
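The label syntax exercised above, spelled out: key=value adds or updates a label, a trailing "-" removes it, and -L surfaces a label as an extra output column (pod name kept from this run; the namespace is a placeholder):

  kubectl label pods pause testing-label=testing-label-value -n <ns>   # add/update
  kubectl get pod pause -L testing-label -n <ns>                       # show TESTING-LABEL column
  kubectl label pods pause testing-label- -n <ns>                      # trailing '-' removes the label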
• [SLOW TEST:10.626 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":57,"skipped":914,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:49:33.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 7 15:49:35.752: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 7 15:49:38.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733776, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:49:40.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63729733776, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:49:42.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733776, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 15:49:44.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733776, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729733775, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 7 15:49:47.510: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 7 15:49:47.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:49:48.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9418" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:15.979 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":58,"skipped":948,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:49:49.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:49:56.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3868" for this suite. STEP: Destroying namespace "nsdeletetest-2253" for this suite. Jul 7 15:49:57.002: INFO: Namespace nsdeletetest-2253 was already deleted STEP: Destroying namespace "nsdeletetest-8052" for this suite. 
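A by-hand version of the invariant this test checks: services are namespaced objects, so deleting their namespace removes them, and a recreated namespace of the same name starts empty. Sketch with assumed names (the test generates random nsdeletetest-* namespaces):

  kubectl create namespace nsdeletetest
  kubectl create service clusterip test-service --tcp=80:80 -n nsdeletetest
  kubectl delete namespace nsdeletetest --wait=true   # blocks until finalizers complete
  kubectl create namespace nsdeletetest
  kubectl get services -n nsdeletetest                # expect: No resources found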
• [SLOW TEST:7.559 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":59,"skipped":956,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:49:57.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Jul 7 15:49:57.975: INFO: Waiting up to 5m0s for pod "var-expansion-8b22884a-e14e-4c37-8db8-8e7336935625" in namespace "var-expansion-3351" to be "success or failure" Jul 7 15:49:58.015: INFO: Pod "var-expansion-8b22884a-e14e-4c37-8db8-8e7336935625": Phase="Pending", Reason="", readiness=false. Elapsed: 39.908436ms Jul 7 15:50:00.024: INFO: Pod "var-expansion-8b22884a-e14e-4c37-8db8-8e7336935625": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049061704s Jul 7 15:50:02.026: INFO: Pod "var-expansion-8b22884a-e14e-4c37-8db8-8e7336935625": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051513573s Jul 7 15:50:04.030: INFO: Pod "var-expansion-8b22884a-e14e-4c37-8db8-8e7336935625": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055537707s STEP: Saw pod success Jul 7 15:50:04.031: INFO: Pod "var-expansion-8b22884a-e14e-4c37-8db8-8e7336935625" satisfied condition "success or failure" Jul 7 15:50:04.034: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-8b22884a-e14e-4c37-8db8-8e7336935625 container dapi-container: STEP: delete the pod Jul 7 15:50:04.076: INFO: Waiting for pod var-expansion-8b22884a-e14e-4c37-8db8-8e7336935625 to disappear Jul 7 15:50:04.093: INFO: Pod var-expansion-8b22884a-e14e-4c37-8db8-8e7336935625 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:50:04.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3351" for this suite. 
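The substitution under test is the $(VAR) syntax in a container's args, which the kubelet expands from the container's own env before exec; it is unrelated to shell expansion. Minimal sketch (pod name, image, and the variable's value are assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo          # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox                  # image assumed
      env:
      - name: TEST_VAR
        value: test-value
      command: ["sh", "-c"]
      args: ["echo $(TEST_VAR)"]      # kubelet expands this to: echo test-value
  EOF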
• [SLOW TEST:7.222 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":964,"failed":0} [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:50:04.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-2e6b3999-a920-4fec-9730-230fbc7c562f Jul 7 15:50:04.644: INFO: Pod name my-hostname-basic-2e6b3999-a920-4fec-9730-230fbc7c562f: Found 0 pods out of 1 Jul 7 15:50:09.711: INFO: Pod name my-hostname-basic-2e6b3999-a920-4fec-9730-230fbc7c562f: Found 1 pods out of 1 Jul 7 15:50:09.711: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2e6b3999-a920-4fec-9730-230fbc7c562f" are running Jul 7 15:50:11.836: INFO: Pod "my-hostname-basic-2e6b3999-a920-4fec-9730-230fbc7c562f-pchpx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 15:50:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 15:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2e6b3999-a920-4fec-9730-230fbc7c562f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 15:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2e6b3999-a920-4fec-9730-230fbc7c562f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 15:50:04 +0000 UTC Reason: Message:}]) Jul 7 15:50:11.836: INFO: Trying to dial the pod Jul 7 15:50:16.848: INFO: Controller my-hostname-basic-2e6b3999-a920-4fec-9730-230fbc7c562f: Got expected result from replica 1 [my-hostname-basic-2e6b3999-a920-4fec-9730-230fbc7c562f-pchpx]: "my-hostname-basic-2e6b3999-a920-4fec-9730-230fbc7c562f-pchpx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:50:16.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8646" for this suite. 
• [SLOW TEST:12.629 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":61,"skipped":964,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:50:16.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-1c3cb8c1-8b20-4ea6-8c6b-da7ebb3952cf STEP: Creating the pod STEP: Updating configmap configmap-test-upd-1c3cb8c1-8b20-4ea6-8c6b-da7ebb3952cf STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:51:26.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1104" for this suite. • [SLOW TEST:69.753 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":977,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:51:26.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:51:59.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3413" for this suite. 
• [SLOW TEST:33.064 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":63,"skipped":1030,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:51:59.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0707 15:52:43.373652 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 7 15:52:43.373: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:52:43.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2214" for this suite. 
• [SLOW TEST:43.706 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":64,"skipped":1034,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:52:43.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 7 15:52:43.464: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jul 7 15:52:45.860: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:52:47.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-184" for this suite. • [SLOW TEST:5.069 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":65,"skipped":1060,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:52:48.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:53:03.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-36" for this suite. • [SLOW TEST:14.951 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":66,"skipped":1073,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:53:03.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-55828202-c18e-4453-b7a5-6da0635bae4c STEP: Creating a pod to test consume secrets Jul 7 15:53:04.868: INFO: Waiting up to 5m0s for pod "pod-secrets-bac45811-d515-496a-a2b5-9129d737e8f4" in namespace "secrets-46" to be "success or failure" Jul 7 15:53:05.227: INFO: Pod "pod-secrets-bac45811-d515-496a-a2b5-9129d737e8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 358.590674ms Jul 7 15:53:07.504: INFO: Pod "pod-secrets-bac45811-d515-496a-a2b5-9129d737e8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.635455621s Jul 7 15:53:09.518: INFO: Pod "pod-secrets-bac45811-d515-496a-a2b5-9129d737e8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.649721218s Jul 7 15:53:11.554: INFO: Pod "pod-secrets-bac45811-d515-496a-a2b5-9129d737e8f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.685609892s STEP: Saw pod success Jul 7 15:53:11.554: INFO: Pod "pod-secrets-bac45811-d515-496a-a2b5-9129d737e8f4" satisfied condition "success or failure" Jul 7 15:53:11.557: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-bac45811-d515-496a-a2b5-9129d737e8f4 container secret-volume-test: STEP: delete the pod Jul 7 15:53:12.048: INFO: Waiting for pod pod-secrets-bac45811-d515-496a-a2b5-9129d737e8f4 to disappear Jul 7 15:53:13.141: INFO: Pod pod-secrets-bac45811-d515-496a-a2b5-9129d737e8f4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:53:13.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-46" for this suite. • [SLOW TEST:10.583 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:53:13.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 7 15:53:15.019: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3716871d-2356-44bf-ae03-4c4b344e8e44" in namespace "downward-api-6202" to be "success or failure" Jul 7 15:53:15.549: INFO: Pod "downwardapi-volume-3716871d-2356-44bf-ae03-4c4b344e8e44": Phase="Pending", Reason="", readiness=false. Elapsed: 530.45551ms Jul 7 15:53:17.554: INFO: Pod "downwardapi-volume-3716871d-2356-44bf-ae03-4c4b344e8e44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.535394167s Jul 7 15:53:19.805: INFO: Pod "downwardapi-volume-3716871d-2356-44bf-ae03-4c4b344e8e44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.786684208s Jul 7 15:53:21.914: INFO: Pod "downwardapi-volume-3716871d-2356-44bf-ae03-4c4b344e8e44": Phase="Running", Reason="", readiness=true. Elapsed: 6.895543496s Jul 7 15:53:23.918: INFO: Pod "downwardapi-volume-3716871d-2356-44bf-ae03-4c4b344e8e44": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.899801858s STEP: Saw pod success Jul 7 15:53:23.918: INFO: Pod "downwardapi-volume-3716871d-2356-44bf-ae03-4c4b344e8e44" satisfied condition "success or failure" Jul 7 15:53:23.921: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3716871d-2356-44bf-ae03-4c4b344e8e44 container client-container: STEP: delete the pod Jul 7 15:53:24.057: INFO: Waiting for pod downwardapi-volume-3716871d-2356-44bf-ae03-4c4b344e8e44 to disappear Jul 7 15:53:24.059: INFO: Pod downwardapi-volume-3716871d-2356-44bf-ae03-4c4b344e8e44 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:53:24.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6202" for this suite. • [SLOW TEST:10.079 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1124,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:53:24.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 7 15:53:24.292: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:53:30.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3671" for this suite. 
• [SLOW TEST:6.787 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":69,"skipped":1125,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:53:30.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 7 15:53:31.095: INFO: Waiting up to 5m0s for pod "pod-d68de15f-bb72-4cb7-99d4-4683f1fbe589" in namespace "emptydir-8830" to be "success or failure" Jul 7 15:53:31.145: INFO: Pod "pod-d68de15f-bb72-4cb7-99d4-4683f1fbe589": Phase="Pending", Reason="", readiness=false. Elapsed: 50.073094ms Jul 7 15:53:33.151: INFO: Pod "pod-d68de15f-bb72-4cb7-99d4-4683f1fbe589": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055851452s Jul 7 15:53:35.154: INFO: Pod "pod-d68de15f-bb72-4cb7-99d4-4683f1fbe589": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059096713s Jul 7 15:53:37.568: INFO: Pod "pod-d68de15f-bb72-4cb7-99d4-4683f1fbe589": Phase="Pending", Reason="", readiness=false. Elapsed: 6.472763716s Jul 7 15:53:39.571: INFO: Pod "pod-d68de15f-bb72-4cb7-99d4-4683f1fbe589": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.476400438s STEP: Saw pod success Jul 7 15:53:39.572: INFO: Pod "pod-d68de15f-bb72-4cb7-99d4-4683f1fbe589" satisfied condition "success or failure" Jul 7 15:53:39.574: INFO: Trying to get logs from node jerma-worker pod pod-d68de15f-bb72-4cb7-99d4-4683f1fbe589 container test-container: STEP: delete the pod Jul 7 15:53:39.795: INFO: Waiting for pod pod-d68de15f-bb72-4cb7-99d4-4683f1fbe589 to disappear Jul 7 15:53:39.884: INFO: Pod pod-d68de15f-bb72-4cb7-99d4-4683f1fbe589 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:53:39.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8830" for this suite. 
• [SLOW TEST:9.647 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1133,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:53:40.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0707 15:53:56.809638 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 7 15:53:56.809: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:53:56.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3000" for this suite. 
• [SLOW TEST:16.681 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":71,"skipped":1134,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:53:57.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 7 15:53:59.594: INFO: Waiting up to 5m0s for pod "pod-c8c91f0d-8b95-4ffb-b8d9-c3fe7141c047" in namespace "emptydir-7319" to be "success or failure" Jul 7 15:53:59.891: INFO: Pod "pod-c8c91f0d-8b95-4ffb-b8d9-c3fe7141c047": Phase="Pending", Reason="", readiness=false. Elapsed: 297.077095ms Jul 7 15:54:01.896: INFO: Pod "pod-c8c91f0d-8b95-4ffb-b8d9-c3fe7141c047": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302356885s Jul 7 15:54:03.973: INFO: Pod "pod-c8c91f0d-8b95-4ffb-b8d9-c3fe7141c047": Phase="Pending", Reason="", readiness=false. Elapsed: 4.379233859s Jul 7 15:54:07.693: INFO: Pod "pod-c8c91f0d-8b95-4ffb-b8d9-c3fe7141c047": Phase="Running", Reason="", readiness=true. Elapsed: 8.099313531s Jul 7 15:54:09.954: INFO: Pod "pod-c8c91f0d-8b95-4ffb-b8d9-c3fe7141c047": Phase="Running", Reason="", readiness=true. Elapsed: 10.359928881s Jul 7 15:54:11.960: INFO: Pod "pod-c8c91f0d-8b95-4ffb-b8d9-c3fe7141c047": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.366182528s STEP: Saw pod success Jul 7 15:54:11.960: INFO: Pod "pod-c8c91f0d-8b95-4ffb-b8d9-c3fe7141c047" satisfied condition "success or failure" Jul 7 15:54:12.027: INFO: Trying to get logs from node jerma-worker2 pod pod-c8c91f0d-8b95-4ffb-b8d9-c3fe7141c047 container test-container: STEP: delete the pod Jul 7 15:54:12.944: INFO: Waiting for pod pod-c8c91f0d-8b95-4ffb-b8d9-c3fe7141c047 to disappear Jul 7 15:54:12.973: INFO: Pod pod-c8c91f0d-8b95-4ffb-b8d9-c3fe7141c047 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:54:12.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7319" for this suite. 
• [SLOW TEST:16.143 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1156,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:54:13.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7217 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7217 STEP: creating replication controller externalsvc in namespace services-7217 I0707 15:54:14.372614 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7217, replica count: 2 I0707 15:54:17.423064 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0707 15:54:20.423279 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0707 15:54:23.423510 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0707 15:54:26.423784 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0707 15:54:29.423982 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jul 7 15:54:30.163: INFO: Creating new exec pod Jul 7 15:54:39.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7217 execpodg5f78 -- /bin/sh -x -c nslookup nodeport-service' Jul 7 15:54:49.628: INFO: stderr: "I0707 15:54:49.529263 794 log.go:172] (0xc0001058c0) (0xc000864000) Create stream\nI0707 15:54:49.529298 794 log.go:172] (0xc0001058c0) (0xc000864000) Stream added, broadcasting: 1\nI0707 15:54:49.531630 794 log.go:172] (0xc0001058c0) Reply frame received for 1\nI0707 15:54:49.531668 794 log.go:172] (0xc0001058c0) (0xc000848000) Create stream\nI0707 15:54:49.531679 794 log.go:172] (0xc0001058c0) (0xc000848000) Stream added, 
broadcasting: 3\nI0707 15:54:49.532511 794 log.go:172] (0xc0001058c0) Reply frame received for 3\nI0707 15:54:49.532536 794 log.go:172] (0xc0001058c0) (0xc000864140) Create stream\nI0707 15:54:49.532543 794 log.go:172] (0xc0001058c0) (0xc000864140) Stream added, broadcasting: 5\nI0707 15:54:49.533785 794 log.go:172] (0xc0001058c0) Reply frame received for 5\nI0707 15:54:49.606986 794 log.go:172] (0xc0001058c0) Data frame received for 5\nI0707 15:54:49.607011 794 log.go:172] (0xc000864140) (5) Data frame handling\nI0707 15:54:49.607026 794 log.go:172] (0xc000864140) (5) Data frame sent\n+ nslookup nodeport-service\nI0707 15:54:49.618246 794 log.go:172] (0xc0001058c0) Data frame received for 3\nI0707 15:54:49.618277 794 log.go:172] (0xc000848000) (3) Data frame handling\nI0707 15:54:49.618294 794 log.go:172] (0xc000848000) (3) Data frame sent\nI0707 15:54:49.619514 794 log.go:172] (0xc0001058c0) Data frame received for 3\nI0707 15:54:49.619539 794 log.go:172] (0xc000848000) (3) Data frame handling\nI0707 15:54:49.619573 794 log.go:172] (0xc000848000) (3) Data frame sent\nI0707 15:54:49.620169 794 log.go:172] (0xc0001058c0) Data frame received for 3\nI0707 15:54:49.620187 794 log.go:172] (0xc000848000) (3) Data frame handling\nI0707 15:54:49.620207 794 log.go:172] (0xc0001058c0) Data frame received for 5\nI0707 15:54:49.620250 794 log.go:172] (0xc000864140) (5) Data frame handling\nI0707 15:54:49.622203 794 log.go:172] (0xc0001058c0) Data frame received for 1\nI0707 15:54:49.622227 794 log.go:172] (0xc000864000) (1) Data frame handling\nI0707 15:54:49.622249 794 log.go:172] (0xc000864000) (1) Data frame sent\nI0707 15:54:49.622264 794 log.go:172] (0xc0001058c0) (0xc000864000) Stream removed, broadcasting: 1\nI0707 15:54:49.622281 794 log.go:172] (0xc0001058c0) Go away received\nI0707 15:54:49.622694 794 log.go:172] (0xc0001058c0) (0xc000864000) Stream removed, broadcasting: 1\nI0707 15:54:49.622716 794 log.go:172] (0xc0001058c0) (0xc000848000) Stream removed, broadcasting: 3\nI0707 15:54:49.622728 794 log.go:172] (0xc0001058c0) (0xc000864140) Stream removed, broadcasting: 5\n" Jul 7 15:54:49.628: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7217.svc.cluster.local\tcanonical name = externalsvc.services-7217.svc.cluster.local.\nName:\texternalsvc.services-7217.svc.cluster.local\nAddress: 10.106.213.65\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7217, will wait for the garbage collector to delete the pods Jul 7 15:54:49.737: INFO: Deleting ReplicationController externalsvc took: 6.362331ms Jul 7 15:54:50.137: INFO: Terminating ReplicationController externalsvc pods took: 400.277576ms Jul 7 15:54:56.878: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:54:56.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7217" for this suite. 
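------------------------------
The type flip verified above (the nslookup returning a CNAME to externalsvc) hinges on what an ExternalName service is: a pure DNS alias with no cluster IP and no proxied ports. A sketch of the in-place mutation, assuming the API's rule that the allocated clusterIP must be dropped when the type changes to ExternalName; clearing the ports (and their nodePorts with them) follows the same logic, since an alias serves nothing itself:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// toExternalName rewrites a NodePort service into a DNS alias for target.
func toExternalName(svc *corev1.Service, target string) {
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = target // e.g. "externalsvc.services-7217.svc.cluster.local"
	svc.Spec.ClusterIP = ""        // ExternalName services own no cluster IP
	svc.Spec.Ports = nil           // nor any (node)ports
}

func main() {
	svc := &corev1.Service{}
	toExternalName(svc, "externalsvc.services-7217.svc.cluster.local")
	fmt.Printf("%+v\n", svc.Spec)
}
------------------------------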
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:43.607 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":73,"skipped":1173,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:54:56.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 7 15:54:57.292: INFO: Waiting up to 5m0s for pod "pod-aa550462-4eee-4c9d-9bb3-2f952c6000da" in namespace "emptydir-1262" to be "success or failure" Jul 7 15:54:57.323: INFO: Pod "pod-aa550462-4eee-4c9d-9bb3-2f952c6000da": Phase="Pending", Reason="", readiness=false. Elapsed: 30.326762ms Jul 7 15:54:59.470: INFO: Pod "pod-aa550462-4eee-4c9d-9bb3-2f952c6000da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177414147s Jul 7 15:55:01.473: INFO: Pod "pod-aa550462-4eee-4c9d-9bb3-2f952c6000da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180265054s Jul 7 15:55:03.597: INFO: Pod "pod-aa550462-4eee-4c9d-9bb3-2f952c6000da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.3046788s STEP: Saw pod success Jul 7 15:55:03.597: INFO: Pod "pod-aa550462-4eee-4c9d-9bb3-2f952c6000da" satisfied condition "success or failure" Jul 7 15:55:03.624: INFO: Trying to get logs from node jerma-worker2 pod pod-aa550462-4eee-4c9d-9bb3-2f952c6000da container test-container: STEP: delete the pod Jul 7 15:55:04.096: INFO: Waiting for pod pod-aa550462-4eee-4c9d-9bb3-2f952c6000da to disappear Jul 7 15:55:04.298: INFO: Pod pod-aa550462-4eee-4c9d-9bb3-2f952c6000da no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:55:04.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1262" for this suite. 
• [SLOW TEST:7.522 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1179,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:55:04.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 7 15:55:05.288: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:55:13.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7450" for this suite. 
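------------------------------
The websocket-logs spec above checks that the pod log endpoint also answers websocket upgrades, not just plain HTTP streaming. A rough sketch of dialing it directly, assuming an API server reachable at a placeholder address with a bearer token; host, pod name, and token are all stand-ins, and skipping TLS verification is for the demo only (the e2e framework goes through the kubeconfig's credentials instead):

package main

import (
	"crypto/tls"
	"fmt"
	"log"

	"golang.org/x/net/websocket"
)

func main() {
	const (
		host  = "wss://127.0.0.1:6443"
		path  = "/api/v1/namespaces/pods-7450/pods/example-pod/log?follow=false"
		token = "REPLACE_ME"
	)
	cfg, err := websocket.NewConfig(host+path, "http://127.0.0.1")
	if err != nil {
		log.Fatal(err)
	}
	cfg.Header.Add("Authorization", "Bearer "+token)
	cfg.TlsConfig = &tls.Config{InsecureSkipVerify: true} // demo only
	ws, err := websocket.DialConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer ws.Close()
	buf := make([]byte, 4096)
	for {
		n, err := ws.Read(buf)
		if err != nil {
			break // io.EOF once the log stream ends
		}
		fmt.Print(string(buf[:n]))
	}
}
------------------------------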
• [SLOW TEST:9.200 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1187,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:55:13.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 7 15:55:18.201: INFO: &Pod{ObjectMeta:{send-events-d9b56d2b-b93b-4757-8919-979dae3580c8 events-6702 /api/v1/namespaces/events-6702/pods/send-events-d9b56d2b-b93b-4757-8919-979dae3580c8 3fa188a4-614e-4a06-afd6-97a649896f59 934840 0 2020-07-07 15:55:14 +0000 UTC map[name:foo time:948759316] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7jl4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7jl4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7jl4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 15:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 15:55:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 15:55:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 15:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.250,StartTime:2020-07-07 15:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 15:55:17 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://9f445de51dff340557d9d874aba8a90bbc08f47d9da3b61bda510e92a6d5d7de,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.250,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jul 7 15:55:20.205: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 7 15:55:22.210: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:55:22.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6702" for this suite. • [SLOW TEST:8.677 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":76,"skipped":1190,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:55:22.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 7 15:55:22.450: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ac25d72-5005-4766-ab04-109658fcc4c4" in namespace "projected-1669" to be "success or failure" Jul 7 15:55:22.474: INFO: Pod "downwardapi-volume-8ac25d72-5005-4766-ab04-109658fcc4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.606799ms Jul 7 15:55:24.527: INFO: Pod "downwardapi-volume-8ac25d72-5005-4766-ab04-109658fcc4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076840724s Jul 7 15:55:26.530: INFO: Pod "downwardapi-volume-8ac25d72-5005-4766-ab04-109658fcc4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080594638s Jul 7 15:55:28.694: INFO: Pod "downwardapi-volume-8ac25d72-5005-4766-ab04-109658fcc4c4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.244142528s STEP: Saw pod success Jul 7 15:55:28.694: INFO: Pod "downwardapi-volume-8ac25d72-5005-4766-ab04-109658fcc4c4" satisfied condition "success or failure" Jul 7 15:55:28.696: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8ac25d72-5005-4766-ab04-109658fcc4c4 container client-container: STEP: delete the pod Jul 7 15:55:28.745: INFO: Waiting for pod downwardapi-volume-8ac25d72-5005-4766-ab04-109658fcc4c4 to disappear Jul 7 15:55:28.814: INFO: Pod downwardapi-volume-8ac25d72-5005-4766-ab04-109658fcc4c4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:55:28.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1669" for this suite. • [SLOW TEST:6.487 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:55:28.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:55:40.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1240" for this suite. • [SLOW TEST:11.771 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":78,"skipped":1319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:55:40.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 7 15:55:40.704: INFO: Waiting up to 5m0s for pod "busybox-user-65534-76a8ea8d-e889-493a-9891-94b9a3a4e054" in namespace "security-context-test-6175" to be "success or failure" Jul 7 15:55:40.751: INFO: Pod "busybox-user-65534-76a8ea8d-e889-493a-9891-94b9a3a4e054": Phase="Pending", Reason="", readiness=false. Elapsed: 46.649239ms Jul 7 15:55:42.756: INFO: Pod "busybox-user-65534-76a8ea8d-e889-493a-9891-94b9a3a4e054": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051199967s Jul 7 15:55:44.982: INFO: Pod "busybox-user-65534-76a8ea8d-e889-493a-9891-94b9a3a4e054": Phase="Pending", Reason="", readiness=false. Elapsed: 4.277227501s Jul 7 15:55:47.202: INFO: Pod "busybox-user-65534-76a8ea8d-e889-493a-9891-94b9a3a4e054": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.49812079s Jul 7 15:55:47.202: INFO: Pod "busybox-user-65534-76a8ea8d-e889-493a-9891-94b9a3a4e054" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:55:47.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6175" for this suite. 
• [SLOW TEST:6.616 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1342,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:55:47.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0707 15:55:49.228525 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 7 15:55:49.228: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:55:49.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4778" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":80,"skipped":1346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:55:49.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-7be18a13-a886-49b5-a61d-2b14421bed1d STEP: Creating a pod to test consume configMaps Jul 7 15:55:49.714: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5177ceff-e447-47aa-8d0e-0594d89b2909" in namespace "projected-4718" to be "success or failure" Jul 7 15:55:49.993: INFO: Pod "pod-projected-configmaps-5177ceff-e447-47aa-8d0e-0594d89b2909": Phase="Pending", Reason="", readiness=false. Elapsed: 279.016771ms Jul 7 15:55:51.997: INFO: Pod "pod-projected-configmaps-5177ceff-e447-47aa-8d0e-0594d89b2909": Phase="Pending", Reason="", readiness=false. Elapsed: 2.283024891s Jul 7 15:55:54.041: INFO: Pod "pod-projected-configmaps-5177ceff-e447-47aa-8d0e-0594d89b2909": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327098486s Jul 7 15:55:56.418: INFO: Pod "pod-projected-configmaps-5177ceff-e447-47aa-8d0e-0594d89b2909": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.704360189s STEP: Saw pod success Jul 7 15:55:56.419: INFO: Pod "pod-projected-configmaps-5177ceff-e447-47aa-8d0e-0594d89b2909" satisfied condition "success or failure" Jul 7 15:55:56.421: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-5177ceff-e447-47aa-8d0e-0594d89b2909 container projected-configmap-volume-test: STEP: delete the pod Jul 7 15:55:56.943: INFO: Waiting for pod pod-projected-configmaps-5177ceff-e447-47aa-8d0e-0594d89b2909 to disappear Jul 7 15:55:57.025: INFO: Pod pod-projected-configmaps-5177ceff-e447-47aa-8d0e-0594d89b2909 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:55:57.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4718" for this suite. 

• [SLOW TEST:8.121 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1375,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:55:57.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul  7 15:55:58.312: INFO: Waiting up to 5m0s for pod "downward-api-d3072824-f859-49e2-a162-770837ef3d0a" in namespace "downward-api-4322" to be "success or failure"
Jul  7 15:55:58.361: INFO: Pod "downward-api-d3072824-f859-49e2-a162-770837ef3d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 49.78613ms
Jul  7 15:56:00.557: INFO: Pod "downward-api-d3072824-f859-49e2-a162-770837ef3d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245565407s
Jul  7 15:56:02.574: INFO: Pod "downward-api-d3072824-f859-49e2-a162-770837ef3d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.262273517s
Jul  7 15:56:04.600: INFO: Pod "downward-api-d3072824-f859-49e2-a162-770837ef3d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288228929s
Jul  7 15:56:06.604: INFO: Pod "downward-api-d3072824-f859-49e2-a162-770837ef3d0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.292459436s
STEP: Saw pod success
Jul  7 15:56:06.604: INFO: Pod "downward-api-d3072824-f859-49e2-a162-770837ef3d0a" satisfied condition "success or failure"
Jul  7 15:56:06.607: INFO: Trying to get logs from node jerma-worker2 pod downward-api-d3072824-f859-49e2-a162-770837ef3d0a container dapi-container: 
STEP: delete the pod
Jul  7 15:56:06.673: INFO: Waiting for pod downward-api-d3072824-f859-49e2-a162-770837ef3d0a to disappear
Jul  7 15:56:06.680: INFO: Pod downward-api-d3072824-f859-49e2-a162-770837ef3d0a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:56:06.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4322" for this suite.

• [SLOW TEST:9.328 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1429,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:56:06.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 15:56:06.790: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d42a46de-fa81-4b71-ad29-92be45a4c1ab" in namespace "projected-3227" to be "success or failure"
Jul  7 15:56:06.794: INFO: Pod "downwardapi-volume-d42a46de-fa81-4b71-ad29-92be45a4c1ab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.61833ms
Jul  7 15:56:08.798: INFO: Pod "downwardapi-volume-d42a46de-fa81-4b71-ad29-92be45a4c1ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0079742s
Jul  7 15:56:10.803: INFO: Pod "downwardapi-volume-d42a46de-fa81-4b71-ad29-92be45a4c1ab": Phase="Running", Reason="", readiness=true. Elapsed: 4.012249413s
Jul  7 15:56:12.831: INFO: Pod "downwardapi-volume-d42a46de-fa81-4b71-ad29-92be45a4c1ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041117763s
STEP: Saw pod success
Jul  7 15:56:12.832: INFO: Pod "downwardapi-volume-d42a46de-fa81-4b71-ad29-92be45a4c1ab" satisfied condition "success or failure"
Jul  7 15:56:12.834: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d42a46de-fa81-4b71-ad29-92be45a4c1ab container client-container: 
STEP: delete the pod
Jul  7 15:56:12.995: INFO: Waiting for pod downwardapi-volume-d42a46de-fa81-4b71-ad29-92be45a4c1ab to disappear
Jul  7 15:56:13.197: INFO: Pod downwardapi-volume-d42a46de-fa81-4b71-ad29-92be45a4c1ab no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:56:13.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3227" for this suite.

• [SLOW TEST:6.518 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1431,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:56:13.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  7 15:56:13.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5625'
Jul  7 15:56:14.277: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  7 15:56:14.277: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686
Jul  7 15:56:14.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5625'
Jul  7 15:56:14.876: INFO: stderr: ""
Jul  7 15:56:14.876: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:56:14.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5625" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":84,"skipped":1438,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 7 15:56:14.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-01233e46-a912-42ec-8ddb-07f1ac426789 STEP: Creating a pod to test consume secrets Jul 7 15:56:15.361: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b119e542-19ef-4149-adf2-3f586bf5bc6e" in namespace "projected-8639" to be "success or failure" Jul 7 15:56:15.448: INFO: Pod "pod-projected-secrets-b119e542-19ef-4149-adf2-3f586bf5bc6e": Phase="Pending", Reason="", readiness=false. Elapsed: 86.352258ms Jul 7 15:56:17.452: INFO: Pod "pod-projected-secrets-b119e542-19ef-4149-adf2-3f586bf5bc6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090385782s Jul 7 15:56:19.535: INFO: Pod "pod-projected-secrets-b119e542-19ef-4149-adf2-3f586bf5bc6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173352858s Jul 7 15:56:21.617: INFO: Pod "pod-projected-secrets-b119e542-19ef-4149-adf2-3f586bf5bc6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.255024575s STEP: Saw pod success Jul 7 15:56:21.617: INFO: Pod "pod-projected-secrets-b119e542-19ef-4149-adf2-3f586bf5bc6e" satisfied condition "success or failure" Jul 7 15:56:21.620: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-b119e542-19ef-4149-adf2-3f586bf5bc6e container projected-secret-volume-test: STEP: delete the pod Jul 7 15:56:21.666: INFO: Waiting for pod pod-projected-secrets-b119e542-19ef-4149-adf2-3f586bf5bc6e to disappear Jul 7 15:56:21.695: INFO: Pod pod-projected-secrets-b119e542-19ef-4149-adf2-3f586bf5bc6e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 7 15:56:21.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8639" for this suite. 

• [SLOW TEST:6.709 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1458,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:56:21.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 15:56:23.163: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 15:56:25.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734183, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734183, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734183, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734183, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 15:56:28.520: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 15:56:28.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:56:29.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8082" for this suite.
STEP: Destroying namespace "webhook-8082-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.099 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":86,"skipped":1459,"failed":0}
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:56:29.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 15:56:30.182: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
alternatives.log
containers/
[the same directory listing is returned for each of the 20 proxy requests; the interleaved per-request INFO lines and the remainder of this test's output are truncated in the source log]
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jul  7 15:56:35.180: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jul  7 15:56:40.306: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:56:40.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6257" for this suite.

• [SLOW TEST:10.123 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":88,"skipped":1466,"failed":0}
SSS
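
For reference, the graceful-deletion flow exercised above can be reproduced by hand; a minimal sketch, with illustrative pod and image names:

  kubectl run graceful-demo --image=busybox --restart=Never -- sleep 3600
  # Delete with a 30s grace period: the kubelet sends SIGTERM, waits up to
  # the grace period, then SIGKILLs; the API object is removed once the
  # kubelet confirms termination.
  kubectl delete pod graceful-demo --grace-period=30
  kubectl get pod graceful-demo   # NotFound once deletion completes
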
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:56:40.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 15:56:40.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49ce70a4-5f25-498b-acc4-d9f3f65ac0e1" in namespace "projected-3025" to be "success or failure"
Jul  7 15:56:40.503: INFO: Pod "downwardapi-volume-49ce70a4-5f25-498b-acc4-d9f3f65ac0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 40.464805ms
Jul  7 15:56:42.595: INFO: Pod "downwardapi-volume-49ce70a4-5f25-498b-acc4-d9f3f65ac0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132399514s
Jul  7 15:56:44.617: INFO: Pod "downwardapi-volume-49ce70a4-5f25-498b-acc4-d9f3f65ac0e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154063903s
STEP: Saw pod success
Jul  7 15:56:44.617: INFO: Pod "downwardapi-volume-49ce70a4-5f25-498b-acc4-d9f3f65ac0e1" satisfied condition "success or failure"
Jul  7 15:56:44.620: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-49ce70a4-5f25-498b-acc4-d9f3f65ac0e1 container client-container: 
STEP: delete the pod
Jul  7 15:56:44.650: INFO: Waiting for pod downwardapi-volume-49ce70a4-5f25-498b-acc4-d9f3f65ac0e1 to disappear
Jul  7 15:56:44.667: INFO: Pod downwardapi-volume-49ce70a4-5f25-498b-acc4-d9f3f65ac0e1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:56:44.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3025" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1469,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
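
A minimal manifest exercising the same projected downwardAPI volume plugin (object names are illustrative, not the suite's generated ones); defaultMode applies to every projected file that does not set an explicit per-item mode:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-mode-demo
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo && sleep 3600"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        defaultMode: 0400        # -r-------- on every projected file
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
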
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:56:44.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-f06cacb4-8ec3-4038-b1f4-9f7a6943b875 in namespace container-probe-1924
Jul  7 15:56:53.318: INFO: Started pod busybox-f06cacb4-8ec3-4038-b1f4-9f7a6943b875 in namespace container-probe-1924
STEP: checking the pod's current state and verifying that restartCount is present
Jul  7 15:56:53.635: INFO: Initial restart count of pod busybox-f06cacb4-8ec3-4038-b1f4-9f7a6943b875 is 0
Jul  7 15:57:42.019: INFO: Restart count of pod container-probe-1924/busybox-f06cacb4-8ec3-4038-b1f4-9f7a6943b875 is now 1 (48.383838222s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:57:42.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1924" for this suite.

• [SLOW TEST:57.682 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1492,"failed":0}
SSSSSSS
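
The probe pattern behind this test is the standard exec liveness probe; a sketch with illustrative names and timings — the container creates /tmp/health, the kubelet periodically runs "cat /tmp/health" inside it, and once the file is removed the failing probe triggers a container restart:

  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo
  spec:
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
        failureThreshold: 3

Watching "kubectl get pod liveness-exec-demo -w", RESTARTS increments once the probe has failed failureThreshold consecutive times, which is the restartCount transition (0 to 1) the test asserts above.
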
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:57:42.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-40e2fb52-452e-4708-a3cd-2ebb21ccaa94
STEP: Creating a pod to test consume configMaps
Jul  7 15:57:43.097: INFO: Waiting up to 5m0s for pod "pod-configmaps-c237667c-6993-4f18-a607-a73a9e36db46" in namespace "configmap-2" to be "success or failure"
Jul  7 15:57:43.099: INFO: Pod "pod-configmaps-c237667c-6993-4f18-a607-a73a9e36db46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.429429ms
Jul  7 15:57:45.247: INFO: Pod "pod-configmaps-c237667c-6993-4f18-a607-a73a9e36db46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149991369s
Jul  7 15:57:47.251: INFO: Pod "pod-configmaps-c237667c-6993-4f18-a607-a73a9e36db46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153658154s
Jul  7 15:57:49.412: INFO: Pod "pod-configmaps-c237667c-6993-4f18-a607-a73a9e36db46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.315144633s
Jul  7 15:57:51.417: INFO: Pod "pod-configmaps-c237667c-6993-4f18-a607-a73a9e36db46": Phase="Running", Reason="", readiness=true. Elapsed: 8.319878003s
Jul  7 15:57:53.421: INFO: Pod "pod-configmaps-c237667c-6993-4f18-a607-a73a9e36db46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.323600271s
STEP: Saw pod success
Jul  7 15:57:53.421: INFO: Pod "pod-configmaps-c237667c-6993-4f18-a607-a73a9e36db46" satisfied condition "success or failure"
Jul  7 15:57:53.423: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-c237667c-6993-4f18-a607-a73a9e36db46 container configmap-volume-test: 
STEP: delete the pod
Jul  7 15:57:53.629: INFO: Waiting for pod pod-configmaps-c237667c-6993-4f18-a607-a73a9e36db46 to disappear
Jul  7 15:57:53.663: INFO: Pod pod-configmaps-c237667c-6993-4f18-a607-a73a9e36db46 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:57:53.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2" for this suite.

• [SLOW TEST:11.405 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1499,"failed":0}
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:57:53.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2155.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2155.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  7 15:58:06.411: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:06.414: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:06.417: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:06.420: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:06.432: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:06.435: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:06.437: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:06.439: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:06.444: INFO: Lookups using dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local]

Jul  7 15:58:11.449: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:11.452: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:11.455: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:11.458: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:11.466: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:11.469: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:11.472: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:11.475: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:11.480: INFO: Lookups using dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local]

Jul  7 15:58:16.449: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:16.453: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:16.456: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:16.460: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:16.469: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:16.472: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:16.475: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:16.478: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:16.483: INFO: Lookups using dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local]

Jul  7 15:58:21.504: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:21.508: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:21.511: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:21.514: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:21.522: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:21.525: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:21.528: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:21.530: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:21.536: INFO: Lookups using dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local]

Jul  7 15:58:26.449: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:26.452: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:26.455: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:26.458: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:26.466: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:26.469: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:26.471: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:26.474: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:26.479: INFO: Lookups using dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local]

Jul  7 15:58:31.469: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:31.472: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:31.475: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:31.478: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:31.486: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:31.489: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:31.491: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:31.494: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local from pod dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6: the server could not find the requested resource (get pods dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6)
Jul  7 15:58:31.498: INFO: Lookups using dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local jessie_udp@dns-test-service-2.dns-2155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2155.svc.cluster.local]

Jul  7 15:58:36.489: INFO: DNS probes using dns-2155/dns-test-9ca3bd16-a997-4202-abc1-12bf8601d2f6 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:58:37.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2155" for this suite.

• [SLOW TEST:44.288 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":92,"skipped":1499,"failed":0}
SSSSSSSSSS
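
Each probe above is an A-record lookup run inside the cluster; the two record shapes being verified can be checked by hand from any pod (names taken verbatim from the test):

  # the headless service name resolves to the IPs of its ready pods
  dig +notcp +noall +answer +search dns-test-service-2.dns-2155.svc.cluster.local A
  # a pod with spec.hostname/spec.subdomain set gets a record under the service subdomain
  dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local A

The repeated "Unable to read" failures above are expected while the probe pod and the DNS records converge; the test only requires that the lookups eventually succeed, which they do at 15:58:36.
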
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:58:38.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-7ae1ea50-b361-4da0-965d-328859f53357
STEP: Creating a pod to test consume configMaps
Jul  7 15:58:39.723: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fc4e2985-0359-41cc-92c9-3f404b118e62" in namespace "projected-1554" to be "success or failure"
Jul  7 15:58:39.812: INFO: Pod "pod-projected-configmaps-fc4e2985-0359-41cc-92c9-3f404b118e62": Phase="Pending", Reason="", readiness=false. Elapsed: 89.317916ms
Jul  7 15:58:41.815: INFO: Pod "pod-projected-configmaps-fc4e2985-0359-41cc-92c9-3f404b118e62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092864247s
Jul  7 15:58:43.973: INFO: Pod "pod-projected-configmaps-fc4e2985-0359-41cc-92c9-3f404b118e62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250529205s
Jul  7 15:58:45.975: INFO: Pod "pod-projected-configmaps-fc4e2985-0359-41cc-92c9-3f404b118e62": Phase="Running", Reason="", readiness=true. Elapsed: 6.252818441s
Jul  7 15:58:47.979: INFO: Pod "pod-projected-configmaps-fc4e2985-0359-41cc-92c9-3f404b118e62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.256500737s
STEP: Saw pod success
Jul  7 15:58:47.979: INFO: Pod "pod-projected-configmaps-fc4e2985-0359-41cc-92c9-3f404b118e62" satisfied condition "success or failure"
Jul  7 15:58:47.996: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-fc4e2985-0359-41cc-92c9-3f404b118e62 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  7 15:58:48.034: INFO: Waiting for pod pod-projected-configmaps-fc4e2985-0359-41cc-92c9-3f404b118e62 to disappear
Jul  7 15:58:48.038: INFO: Pod pod-projected-configmaps-fc4e2985-0359-41cc-92c9-3f404b118e62 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:58:48.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1554" for this suite.

• [SLOW TEST:9.994 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1509,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:58:48.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:59:06.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5987" for this suite.
STEP: Destroying namespace "nsdeletetest-8112" for this suite.
Jul  7 15:59:07.023: INFO: Namespace nsdeletetest-8112 was already deleted
STEP: Destroying namespace "nsdeletetest-1191" for this suite.

• [SLOW TEST:18.981 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":94,"skipped":1545,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:59:07.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 15:59:09.349: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 15:59:11.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734349, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734349, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734349, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734349, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 15:59:13.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734349, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734349, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734349, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734349, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 15:59:16.891: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:59:17.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5294" for this suite.
STEP: Destroying namespace "webhook-5294-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.827 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":95,"skipped":1558,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:59:17.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jul  7 15:59:18.248: INFO: Created pod &Pod{ObjectMeta:{dns-1148  dns-1148 /api/v1/namespaces/dns-1148/pods/dns-1148 792d6b5b-10bc-4da4-9d0d-55cd80775084 936106 0 2020-07-07 15:59:18 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9kjw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9kjw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9kjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Jul  7 15:59:26.303: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1148 PodName:dns-1148 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 15:59:26.303: INFO: >>> kubeConfig: /root/.kube/config
I0707 15:59:26.422654       6 log.go:172] (0xc0028ea630) (0xc0026dd2c0) Create stream
I0707 15:59:26.422678       6 log.go:172] (0xc0028ea630) (0xc0026dd2c0) Stream added, broadcasting: 1
I0707 15:59:26.424677       6 log.go:172] (0xc0028ea630) Reply frame received for 1
I0707 15:59:26.424742       6 log.go:172] (0xc0028ea630) (0xc00285fd60) Create stream
I0707 15:59:26.424755       6 log.go:172] (0xc0028ea630) (0xc00285fd60) Stream added, broadcasting: 3
I0707 15:59:26.425687       6 log.go:172] (0xc0028ea630) Reply frame received for 3
I0707 15:59:26.425713       6 log.go:172] (0xc0028ea630) (0xc0026dd360) Create stream
I0707 15:59:26.425723       6 log.go:172] (0xc0028ea630) (0xc0026dd360) Stream added, broadcasting: 5
I0707 15:59:26.426417       6 log.go:172] (0xc0028ea630) Reply frame received for 5
I0707 15:59:26.501148       6 log.go:172] (0xc0028ea630) Data frame received for 3
I0707 15:59:26.501206       6 log.go:172] (0xc00285fd60) (3) Data frame handling
I0707 15:59:26.501243       6 log.go:172] (0xc00285fd60) (3) Data frame sent
I0707 15:59:26.502443       6 log.go:172] (0xc0028ea630) Data frame received for 5
I0707 15:59:26.502478       6 log.go:172] (0xc0028ea630) Data frame received for 3
I0707 15:59:26.502499       6 log.go:172] (0xc00285fd60) (3) Data frame handling
I0707 15:59:26.502532       6 log.go:172] (0xc0026dd360) (5) Data frame handling
I0707 15:59:26.504414       6 log.go:172] (0xc0028ea630) Data frame received for 1
I0707 15:59:26.504452       6 log.go:172] (0xc0026dd2c0) (1) Data frame handling
I0707 15:59:26.504490       6 log.go:172] (0xc0026dd2c0) (1) Data frame sent
I0707 15:59:26.504518       6 log.go:172] (0xc0028ea630) (0xc0026dd2c0) Stream removed, broadcasting: 1
I0707 15:59:26.504540       6 log.go:172] (0xc0028ea630) Go away received
I0707 15:59:26.504761       6 log.go:172] (0xc0028ea630) (0xc0026dd2c0) Stream removed, broadcasting: 1
I0707 15:59:26.504800       6 log.go:172] (0xc0028ea630) (0xc00285fd60) Stream removed, broadcasting: 3
I0707 15:59:26.504833       6 log.go:172] (0xc0028ea630) (0xc0026dd360) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jul  7 15:59:26.504: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1148 PodName:dns-1148 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 15:59:26.504: INFO: >>> kubeConfig: /root/.kube/config
I0707 15:59:26.546986       6 log.go:172] (0xc002398630) (0xc0019e8000) Create stream
I0707 15:59:26.547018       6 log.go:172] (0xc002398630) (0xc0019e8000) Stream added, broadcasting: 1
I0707 15:59:26.550163       6 log.go:172] (0xc002398630) Reply frame received for 1
I0707 15:59:26.550195       6 log.go:172] (0xc002398630) (0xc0027e92c0) Create stream
I0707 15:59:26.550205       6 log.go:172] (0xc002398630) (0xc0027e92c0) Stream added, broadcasting: 3
I0707 15:59:26.551281       6 log.go:172] (0xc002398630) Reply frame received for 3
I0707 15:59:26.551305       6 log.go:172] (0xc002398630) (0xc0027e9360) Create stream
I0707 15:59:26.551313       6 log.go:172] (0xc002398630) (0xc0027e9360) Stream added, broadcasting: 5
I0707 15:59:26.552361       6 log.go:172] (0xc002398630) Reply frame received for 5
I0707 15:59:26.633031       6 log.go:172] (0xc002398630) Data frame received for 3
I0707 15:59:26.633068       6 log.go:172] (0xc0027e92c0) (3) Data frame handling
I0707 15:59:26.633089       6 log.go:172] (0xc0027e92c0) (3) Data frame sent
I0707 15:59:26.634493       6 log.go:172] (0xc002398630) Data frame received for 3
I0707 15:59:26.634525       6 log.go:172] (0xc0027e92c0) (3) Data frame handling
I0707 15:59:26.634562       6 log.go:172] (0xc002398630) Data frame received for 5
I0707 15:59:26.634609       6 log.go:172] (0xc0027e9360) (5) Data frame handling
I0707 15:59:26.635878       6 log.go:172] (0xc002398630) Data frame received for 1
I0707 15:59:26.635909       6 log.go:172] (0xc0019e8000) (1) Data frame handling
I0707 15:59:26.635931       6 log.go:172] (0xc0019e8000) (1) Data frame sent
I0707 15:59:26.635948       6 log.go:172] (0xc002398630) (0xc0019e8000) Stream removed, broadcasting: 1
I0707 15:59:26.635964       6 log.go:172] (0xc002398630) Go away received
I0707 15:59:26.636356       6 log.go:172] (0xc002398630) (0xc0019e8000) Stream removed, broadcasting: 1
I0707 15:59:26.636368       6 log.go:172] (0xc002398630) (0xc0027e92c0) Stream removed, broadcasting: 3
I0707 15:59:26.636375       6 log.go:172] (0xc002398630) (0xc0027e9360) Stream removed, broadcasting: 5
Jul  7 15:59:26.636: INFO: Deleting pod dns-1148...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:59:26.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1148" for this suite.

• [SLOW TEST:8.957 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":96,"skipped":1562,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:59:26.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 15:59:27.367: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3bc54e2c-1b54-42b6-b0ba-0411a8547cb5" in namespace "downward-api-1416" to be "success or failure"
Jul  7 15:59:27.393: INFO: Pod "downwardapi-volume-3bc54e2c-1b54-42b6-b0ba-0411a8547cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.101715ms
Jul  7 15:59:29.398: INFO: Pod "downwardapi-volume-3bc54e2c-1b54-42b6-b0ba-0411a8547cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030509652s
Jul  7 15:59:31.402: INFO: Pod "downwardapi-volume-3bc54e2c-1b54-42b6-b0ba-0411a8547cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034142286s
Jul  7 15:59:33.527: INFO: Pod "downwardapi-volume-3bc54e2c-1b54-42b6-b0ba-0411a8547cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159750696s
Jul  7 15:59:35.536: INFO: Pod "downwardapi-volume-3bc54e2c-1b54-42b6-b0ba-0411a8547cb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.168807976s
STEP: Saw pod success
Jul  7 15:59:35.536: INFO: Pod "downwardapi-volume-3bc54e2c-1b54-42b6-b0ba-0411a8547cb5" satisfied condition "success or failure"
Jul  7 15:59:35.540: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3bc54e2c-1b54-42b6-b0ba-0411a8547cb5 container client-container: 
STEP: delete the pod
Jul  7 15:59:35.711: INFO: Waiting for pod downwardapi-volume-3bc54e2c-1b54-42b6-b0ba-0411a8547cb5 to disappear
Jul  7 15:59:35.722: INFO: Pod downwardapi-volume-3bc54e2c-1b54-42b6-b0ba-0411a8547cb5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:59:35.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1416" for this suite.

• [SLOW TEST:8.917 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1563,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:59:35.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Jul  7 15:59:35.869: INFO: Waiting up to 5m0s for pod "client-containers-b729a36d-b2c0-4ca0-936d-bd5bebc50416" in namespace "containers-9059" to be "success or failure"
Jul  7 15:59:35.872: INFO: Pod "client-containers-b729a36d-b2c0-4ca0-936d-bd5bebc50416": Phase="Pending", Reason="", readiness=false. Elapsed: 3.137109ms
Jul  7 15:59:38.186: INFO: Pod "client-containers-b729a36d-b2c0-4ca0-936d-bd5bebc50416": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316848587s
Jul  7 15:59:40.517: INFO: Pod "client-containers-b729a36d-b2c0-4ca0-936d-bd5bebc50416": Phase="Pending", Reason="", readiness=false. Elapsed: 4.64829649s
Jul  7 15:59:42.521: INFO: Pod "client-containers-b729a36d-b2c0-4ca0-936d-bd5bebc50416": Phase="Pending", Reason="", readiness=false. Elapsed: 6.652172129s
Jul  7 15:59:44.583: INFO: Pod "client-containers-b729a36d-b2c0-4ca0-936d-bd5bebc50416": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.71439605s
STEP: Saw pod success
Jul  7 15:59:44.583: INFO: Pod "client-containers-b729a36d-b2c0-4ca0-936d-bd5bebc50416" satisfied condition "success or failure"
Jul  7 15:59:44.629: INFO: Trying to get logs from node jerma-worker2 pod client-containers-b729a36d-b2c0-4ca0-936d-bd5bebc50416 container test-container: 
STEP: delete the pod
Jul  7 15:59:44.783: INFO: Waiting for pod client-containers-b729a36d-b2c0-4ca0-936d-bd5bebc50416 to disappear
Jul  7 15:59:44.826: INFO: Pod client-containers-b729a36d-b2c0-4ca0-936d-bd5bebc50416 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 15:59:44.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9059" for this suite.

• [SLOW TEST:9.103 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1572,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 15:59:44.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2939
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jul  7 15:59:45.587: INFO: Found 0 stateful pods, waiting for 3
Jul  7 15:59:55.592: INFO: Found 2 stateful pods, waiting for 3
Jul  7 16:00:05.991: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 16:00:05.991: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 16:00:05.991: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 16:00:06.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2939 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  7 16:00:06.872: INFO: stderr: "I0707 16:00:06.759067     892 log.go:172] (0xc000ac6000) (0xc000665ae0) Create stream\nI0707 16:00:06.759113     892 log.go:172] (0xc000ac6000) (0xc000665ae0) Stream added, broadcasting: 1\nI0707 16:00:06.761567     892 log.go:172] (0xc000ac6000) Reply frame received for 1\nI0707 16:00:06.761590     892 log.go:172] (0xc000ac6000) (0xc000ab0000) Create stream\nI0707 16:00:06.761597     892 log.go:172] (0xc000ac6000) (0xc000ab0000) Stream added, broadcasting: 3\nI0707 16:00:06.762413     892 log.go:172] (0xc000ac6000) Reply frame received for 3\nI0707 16:00:06.762450     892 log.go:172] (0xc000ac6000) (0xc000ab00a0) Create stream\nI0707 16:00:06.762462     892 log.go:172] (0xc000ac6000) (0xc000ab00a0) Stream added, broadcasting: 5\nI0707 16:00:06.763112     892 log.go:172] (0xc000ac6000) Reply frame received for 5\nI0707 16:00:06.815600     892 log.go:172] (0xc000ac6000) Data frame received for 5\nI0707 16:00:06.815628     892 log.go:172] (0xc000ab00a0) (5) Data frame handling\nI0707 16:00:06.815648     892 log.go:172] (0xc000ab00a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0707 16:00:06.866018     892 log.go:172] (0xc000ac6000) Data frame received for 3\nI0707 16:00:06.866051     892 log.go:172] (0xc000ab0000) (3) Data frame handling\nI0707 16:00:06.866076     892 log.go:172] (0xc000ab0000) (3) Data frame sent\nI0707 16:00:06.866087     892 log.go:172] (0xc000ac6000) Data frame received for 3\nI0707 16:00:06.866094     892 log.go:172] (0xc000ab0000) (3) Data frame handling\nI0707 16:00:06.866269     892 log.go:172] (0xc000ac6000) Data frame received for 5\nI0707 16:00:06.866294     892 log.go:172] (0xc000ab00a0) (5) Data frame handling\nI0707 16:00:06.868068     892 log.go:172] (0xc000ac6000) Data frame received for 1\nI0707 16:00:06.868096     892 log.go:172] (0xc000665ae0) (1) Data frame handling\nI0707 16:00:06.868113     892 log.go:172] (0xc000665ae0) (1) Data frame sent\nI0707 16:00:06.868132     892 log.go:172] (0xc000ac6000) (0xc000665ae0) Stream removed, broadcasting: 1\nI0707 16:00:06.868157     892 log.go:172] (0xc000ac6000) Go away received\nI0707 16:00:06.868477     892 log.go:172] (0xc000ac6000) (0xc000665ae0) Stream removed, broadcasting: 1\nI0707 16:00:06.868490     892 log.go:172] (0xc000ac6000) (0xc000ab0000) Stream removed, broadcasting: 3\nI0707 16:00:06.868496     892 log.go:172] (0xc000ac6000) (0xc000ab00a0) Stream removed, broadcasting: 5\n"
Jul  7 16:00:06.873: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  7 16:00:06.873: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul  7 16:00:16.904: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul  7 16:00:27.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2939 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  7 16:00:27.733: INFO: stderr: "I0707 16:00:27.641613     912 log.go:172] (0xc0001042c0) (0xc000976000) Create stream\nI0707 16:00:27.641681     912 log.go:172] (0xc0001042c0) (0xc000976000) Stream added, broadcasting: 1\nI0707 16:00:27.644274     912 log.go:172] (0xc0001042c0) Reply frame received for 1\nI0707 16:00:27.644305     912 log.go:172] (0xc0001042c0) (0xc000695a40) Create stream\nI0707 16:00:27.644313     912 log.go:172] (0xc0001042c0) (0xc000695a40) Stream added, broadcasting: 3\nI0707 16:00:27.645352     912 log.go:172] (0xc0001042c0) Reply frame received for 3\nI0707 16:00:27.645396     912 log.go:172] (0xc0001042c0) (0xc0002ac000) Create stream\nI0707 16:00:27.645408     912 log.go:172] (0xc0001042c0) (0xc0002ac000) Stream added, broadcasting: 5\nI0707 16:00:27.646254     912 log.go:172] (0xc0001042c0) Reply frame received for 5\nI0707 16:00:27.700615     912 log.go:172] (0xc0001042c0) Data frame received for 5\nI0707 16:00:27.700632     912 log.go:172] (0xc0002ac000) (5) Data frame handling\nI0707 16:00:27.700642     912 log.go:172] (0xc0002ac000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0707 16:00:27.723132     912 log.go:172] (0xc0001042c0) Data frame received for 5\nI0707 16:00:27.723149     912 log.go:172] (0xc0002ac000) (5) Data frame handling\nI0707 16:00:27.723215     912 log.go:172] (0xc0001042c0) Data frame received for 3\nI0707 16:00:27.723252     912 log.go:172] (0xc000695a40) (3) Data frame handling\nI0707 16:00:27.723284     912 log.go:172] (0xc000695a40) (3) Data frame sent\nI0707 16:00:27.724449     912 log.go:172] (0xc0001042c0) Data frame received for 3\nI0707 16:00:27.724461     912 log.go:172] (0xc000695a40) (3) Data frame handling\nI0707 16:00:27.725729     912 log.go:172] (0xc0001042c0) Data frame received for 1\nI0707 16:00:27.725747     912 log.go:172] (0xc000976000) (1) Data frame handling\nI0707 16:00:27.725757     912 log.go:172] (0xc000976000) (1) Data frame sent\nI0707 16:00:27.725769     912 log.go:172] (0xc0001042c0) (0xc000976000) Stream removed, broadcasting: 1\nI0707 16:00:27.725788     912 log.go:172] (0xc0001042c0) Go away received\nI0707 16:00:27.726225     912 log.go:172] (0xc0001042c0) (0xc000976000) Stream removed, broadcasting: 1\nI0707 16:00:27.726247     912 log.go:172] (0xc0001042c0) (0xc000695a40) Stream removed, broadcasting: 3\nI0707 16:00:27.726262     912 log.go:172] (0xc0001042c0) (0xc0002ac000) Stream removed, broadcasting: 5\n"
Jul  7 16:00:27.734: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  7 16:00:27.734: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  7 16:00:37.799: INFO: Waiting for StatefulSet statefulset-2939/ss2 to complete update
Jul  7 16:00:37.799: INFO: Waiting for Pod statefulset-2939/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  7 16:00:37.799: INFO: Waiting for Pod statefulset-2939/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  7 16:00:47.807: INFO: Waiting for StatefulSet statefulset-2939/ss2 to complete update
Jul  7 16:00:47.807: INFO: Waiting for Pod statefulset-2939/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  7 16:00:57.805: INFO: Waiting for StatefulSet statefulset-2939/ss2 to complete update
Jul  7 16:00:57.805: INFO: Waiting for Pod statefulset-2939/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  7 16:01:07.912: INFO: Waiting for StatefulSet statefulset-2939/ss2 to complete update
STEP: Rolling back to a previous revision
Jul  7 16:01:17.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2939 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  7 16:01:18.529: INFO: stderr: "I0707 16:01:17.928046     932 log.go:172] (0xc000117e40) (0xc0008d88c0) Create stream\nI0707 16:01:17.928109     932 log.go:172] (0xc000117e40) (0xc0008d88c0) Stream added, broadcasting: 1\nI0707 16:01:17.933340     932 log.go:172] (0xc000117e40) Reply frame received for 1\nI0707 16:01:17.933375     932 log.go:172] (0xc000117e40) (0xc000634640) Create stream\nI0707 16:01:17.933385     932 log.go:172] (0xc000117e40) (0xc000634640) Stream added, broadcasting: 3\nI0707 16:01:17.934392     932 log.go:172] (0xc000117e40) Reply frame received for 3\nI0707 16:01:17.934417     932 log.go:172] (0xc000117e40) (0xc0002fd400) Create stream\nI0707 16:01:17.934424     932 log.go:172] (0xc000117e40) (0xc0002fd400) Stream added, broadcasting: 5\nI0707 16:01:17.935275     932 log.go:172] (0xc000117e40) Reply frame received for 5\nI0707 16:01:17.989483     932 log.go:172] (0xc000117e40) Data frame received for 5\nI0707 16:01:17.989506     932 log.go:172] (0xc0002fd400) (5) Data frame handling\nI0707 16:01:17.989528     932 log.go:172] (0xc0002fd400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0707 16:01:18.520120     932 log.go:172] (0xc000117e40) Data frame received for 3\nI0707 16:01:18.520153     932 log.go:172] (0xc000634640) (3) Data frame handling\nI0707 16:01:18.520180     932 log.go:172] (0xc000634640) (3) Data frame sent\nI0707 16:01:18.520648     932 log.go:172] (0xc000117e40) Data frame received for 3\nI0707 16:01:18.520666     932 log.go:172] (0xc000634640) (3) Data frame handling\nI0707 16:01:18.520722     932 log.go:172] (0xc000117e40) Data frame received for 5\nI0707 16:01:18.520782     932 log.go:172] (0xc0002fd400) (5) Data frame handling\nI0707 16:01:18.523243     932 log.go:172] (0xc000117e40) Data frame received for 1\nI0707 16:01:18.523261     932 log.go:172] (0xc0008d88c0) (1) Data frame handling\nI0707 16:01:18.523273     932 log.go:172] (0xc0008d88c0) (1) Data frame sent\nI0707 16:01:18.523332     932 log.go:172] (0xc000117e40) (0xc0008d88c0) Stream removed, broadcasting: 1\nI0707 16:01:18.523396     932 log.go:172] (0xc000117e40) Go away received\nI0707 16:01:18.523804     932 log.go:172] (0xc000117e40) (0xc0008d88c0) Stream removed, broadcasting: 1\nI0707 16:01:18.523827     932 log.go:172] (0xc000117e40) (0xc000634640) Stream removed, broadcasting: 3\nI0707 16:01:18.523846     932 log.go:172] (0xc000117e40) (0xc0002fd400) Stream removed, broadcasting: 5\n"
Jul  7 16:01:18.529: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  7 16:01:18.529: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  7 16:01:28.644: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul  7 16:01:38.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2939 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  7 16:01:39.497: INFO: stderr: "I0707 16:01:39.409886     952 log.go:172] (0xc000a22f20) (0xc000962460) Create stream\nI0707 16:01:39.409954     952 log.go:172] (0xc000a22f20) (0xc000962460) Stream added, broadcasting: 1\nI0707 16:01:39.412956     952 log.go:172] (0xc000a22f20) Reply frame received for 1\nI0707 16:01:39.413039     952 log.go:172] (0xc000a22f20) (0xc000946000) Create stream\nI0707 16:01:39.413081     952 log.go:172] (0xc000a22f20) (0xc000946000) Stream added, broadcasting: 3\nI0707 16:01:39.414493     952 log.go:172] (0xc000a22f20) Reply frame received for 3\nI0707 16:01:39.414538     952 log.go:172] (0xc000a22f20) (0xc00099c000) Create stream\nI0707 16:01:39.414556     952 log.go:172] (0xc000a22f20) (0xc00099c000) Stream added, broadcasting: 5\nI0707 16:01:39.415498     952 log.go:172] (0xc000a22f20) Reply frame received for 5\nI0707 16:01:39.486264     952 log.go:172] (0xc000a22f20) Data frame received for 5\nI0707 16:01:39.486311     952 log.go:172] (0xc00099c000) (5) Data frame handling\nI0707 16:01:39.486340     952 log.go:172] (0xc00099c000) (5) Data frame sent\nI0707 16:01:39.486355     952 log.go:172] (0xc000a22f20) Data frame received for 5\nI0707 16:01:39.486368     952 log.go:172] (0xc00099c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0707 16:01:39.490473     952 log.go:172] (0xc000a22f20) Data frame received for 3\nI0707 16:01:39.490504     952 log.go:172] (0xc000946000) (3) Data frame handling\nI0707 16:01:39.490532     952 log.go:172] (0xc000946000) (3) Data frame sent\nI0707 16:01:39.490559     952 log.go:172] (0xc000a22f20) Data frame received for 3\nI0707 16:01:39.490578     952 log.go:172] (0xc000946000) (3) Data frame handling\nI0707 16:01:39.491841     952 log.go:172] (0xc000a22f20) Data frame received for 1\nI0707 16:01:39.491920     952 log.go:172] (0xc000962460) (1) Data frame handling\nI0707 16:01:39.491957     952 log.go:172] (0xc000962460) (1) Data frame sent\nI0707 16:01:39.492024     952 log.go:172] (0xc000a22f20) (0xc000962460) Stream removed, broadcasting: 1\nI0707 16:01:39.492053     952 log.go:172] (0xc000a22f20) Go away received\nI0707 16:01:39.492806     952 log.go:172] (0xc000a22f20) (0xc000962460) Stream removed, broadcasting: 1\nI0707 16:01:39.492843     952 log.go:172] (0xc000a22f20) (0xc000946000) Stream removed, broadcasting: 3\nI0707 16:01:39.492862     952 log.go:172] (0xc000a22f20) (0xc00099c000) Stream removed, broadcasting: 5\n"
Jul  7 16:01:39.497: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  7 16:01:39.497: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  7 16:02:09.982: INFO: Waiting for StatefulSet statefulset-2939/ss2 to complete update
Jul  7 16:02:09.982: INFO: Waiting for Pod statefulset-2939/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul  7 16:02:20.265: INFO: Waiting for StatefulSet statefulset-2939/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul  7 16:02:30.681: INFO: Deleting all statefulset in ns statefulset-2939
Jul  7 16:02:31.293: INFO: Scaling statefulset ss2 to 0
Jul  7 16:03:01.784: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 16:03:01.787: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:03:01.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2939" for this suite.

• [SLOW TEST:197.030 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":99,"skipped":1572,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:03:01.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 16:03:02.139: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1b65aec-8618-4715-ba0f-fdce6f663ac4" in namespace "projected-4469" to be "success or failure"
Jul  7 16:03:02.383: INFO: Pod "downwardapi-volume-c1b65aec-8618-4715-ba0f-fdce6f663ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 243.870861ms
Jul  7 16:03:04.388: INFO: Pod "downwardapi-volume-c1b65aec-8618-4715-ba0f-fdce6f663ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24835359s
Jul  7 16:03:06.547: INFO: Pod "downwardapi-volume-c1b65aec-8618-4715-ba0f-fdce6f663ac4": Phase="Running", Reason="", readiness=true. Elapsed: 4.407084355s
Jul  7 16:03:08.550: INFO: Pod "downwardapi-volume-c1b65aec-8618-4715-ba0f-fdce6f663ac4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.410991287s
STEP: Saw pod success
Jul  7 16:03:08.550: INFO: Pod "downwardapi-volume-c1b65aec-8618-4715-ba0f-fdce6f663ac4" satisfied condition "success or failure"
Jul  7 16:03:08.573: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c1b65aec-8618-4715-ba0f-fdce6f663ac4 container client-container: 
STEP: delete the pod
Jul  7 16:03:08.700: INFO: Waiting for pod downwardapi-volume-c1b65aec-8618-4715-ba0f-fdce6f663ac4 to disappear
Jul  7 16:03:08.796: INFO: Pod downwardapi-volume-c1b65aec-8618-4715-ba0f-fdce6f663ac4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:03:08.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4469" for this suite.

• [SLOW TEST:7.089 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1583,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:03:08.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 16:03:09.611: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 16:03:11.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:03:13.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:03:15.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:03:17.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:03:19.887: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:03:21.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729734589, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 16:03:24.793: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:03:25.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8983" for this suite.
STEP: Destroying namespace "webhook-8983-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.199 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":101,"skipped":1647,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:03:28.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-8445/configmap-test-de55df39-faac-4f49-a727-ed3a635eed1e
STEP: Creating a pod to test consume configMaps
Jul  7 16:03:31.067: INFO: Waiting up to 5m0s for pod "pod-configmaps-5fa3c125-03f5-4892-b002-0044e9ee561e" in namespace "configmap-8445" to be "success or failure"
Jul  7 16:03:31.161: INFO: Pod "pod-configmaps-5fa3c125-03f5-4892-b002-0044e9ee561e": Phase="Pending", Reason="", readiness=false. Elapsed: 93.862406ms
Jul  7 16:03:33.383: INFO: Pod "pod-configmaps-5fa3c125-03f5-4892-b002-0044e9ee561e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315887119s
Jul  7 16:03:36.187: INFO: Pod "pod-configmaps-5fa3c125-03f5-4892-b002-0044e9ee561e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.119184381s
Jul  7 16:03:38.431: INFO: Pod "pod-configmaps-5fa3c125-03f5-4892-b002-0044e9ee561e": Phase="Running", Reason="", readiness=true. Elapsed: 7.363780221s
Jul  7 16:03:40.436: INFO: Pod "pod-configmaps-5fa3c125-03f5-4892-b002-0044e9ee561e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.368138726s
STEP: Saw pod success
Jul  7 16:03:40.436: INFO: Pod "pod-configmaps-5fa3c125-03f5-4892-b002-0044e9ee561e" satisfied condition "success or failure"
Jul  7 16:03:40.439: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-5fa3c125-03f5-4892-b002-0044e9ee561e container env-test: 
STEP: delete the pod
Jul  7 16:03:40.635: INFO: Waiting for pod pod-configmaps-5fa3c125-03f5-4892-b002-0044e9ee561e to disappear
Jul  7 16:03:40.935: INFO: Pod pod-configmaps-5fa3c125-03f5-4892-b002-0044e9ee561e no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:03:40.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8445" for this suite.

• [SLOW TEST:12.843 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1663,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:03:40.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:03:42.856: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jul  7 16:03:48.426: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  7 16:03:52.894: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul  7 16:03:53.150: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-3198 /apis/apps/v1/namespaces/deployment-3198/deployments/test-cleanup-deployment d44ce581-4252-45a8-a6ee-872b2f1dc4b9 937377 1 2020-07-07 16:03:52 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e63b68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Jul  7 16:03:53.251: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-3198 /apis/apps/v1/namespaces/deployment-3198/replicasets/test-cleanup-deployment-55ffc6b7b6 504ff657-1774-41ae-a6f7-cc6e266486dd 937380 1 2020-07-07 16:03:53 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment d44ce581-4252-45a8-a6ee-872b2f1dc4b9 0xc002e63f97 0xc002e63f98}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022ae008  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  7 16:03:53.251: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jul  7 16:03:53.251: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-3198 /apis/apps/v1/namespaces/deployment-3198/replicasets/test-cleanup-controller 5d3cdbc7-67ec-4d51-b2a9-81f9eec3d883 937379 1 2020-07-07 16:03:42 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment d44ce581-4252-45a8-a6ee-872b2f1dc4b9 0xc002e63ec7 0xc002e63ec8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e63f28  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul  7 16:03:53.420: INFO: Pod "test-cleanup-controller-84bgd" is available:
&Pod{ObjectMeta:{test-cleanup-controller-84bgd test-cleanup-controller- deployment-3198 /api/v1/namespaces/deployment-3198/pods/test-cleanup-controller-84bgd 8a291436-6508-4b36-9df0-a0b142bde869 937374 0 2020-07-07 16:03:42 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 5d3cdbc7-67ec-4d51-b2a9-81f9eec3d883 0xc0022ae7d7 0xc0022ae7d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kssbg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kssbg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kssbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:03:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:03:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:03:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:03:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.13,StartTime:2020-07-07 16:03:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 16:03:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4b83bc17d613518296092d6354425345aa905eb92ef35920bc81ef6c328d938e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:03:53.420: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-lzqx7" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-lzqx7 test-cleanup-deployment-55ffc6b7b6- deployment-3198 /api/v1/namespaces/deployment-3198/pods/test-cleanup-deployment-55ffc6b7b6-lzqx7 1b5fb116-8342-47b2-bcc6-de13b483e26c 937386 0 2020-07-07 16:03:53 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 504ff657-1774-41ae-a6f7-cc6e266486dd 0xc0022ae977 0xc0022ae978}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kssbg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kssbg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kssbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:03:53 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:03:53.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3198" for this suite.

• [SLOW TEST:12.622 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":103,"skipped":1681,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:03:53.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Jul  7 16:03:53.859: INFO: Waiting up to 5m0s for pod "var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750" in namespace "var-expansion-9357" to be "success or failure"
Jul  7 16:03:53.875: INFO: Pod "var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750": Phase="Pending", Reason="", readiness=false. Elapsed: 15.766853ms
Jul  7 16:03:56.335: INFO: Pod "var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475778073s
Jul  7 16:03:58.534: INFO: Pod "var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750": Phase="Pending", Reason="", readiness=false. Elapsed: 4.674757058s
Jul  7 16:04:00.648: INFO: Pod "var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750": Phase="Pending", Reason="", readiness=false. Elapsed: 6.788810686s
Jul  7 16:04:02.934: INFO: Pod "var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750": Phase="Pending", Reason="", readiness=false. Elapsed: 9.075563134s
Jul  7 16:04:05.591: INFO: Pod "var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750": Phase="Pending", Reason="", readiness=false. Elapsed: 11.732038088s
Jul  7 16:04:08.116: INFO: Pod "var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750": Phase="Running", Reason="", readiness=true. Elapsed: 14.257323953s
Jul  7 16:04:10.935: INFO: Pod "var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.076083395s
STEP: Saw pod success
Jul  7 16:04:10.935: INFO: Pod "var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750" satisfied condition "success or failure"
Jul  7 16:04:10.938: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750 container dapi-container: 
STEP: delete the pod
Jul  7 16:04:12.504: INFO: Waiting for pod var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750 to disappear
Jul  7 16:04:12.581: INFO: Pod var-expansion-ced3bd5c-afe0-4bef-b53f-459895ddc750 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:04:12.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9357" for this suite.

• [SLOW TEST:18.970 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1705,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:04:12.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:04:13.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jul  7 16:04:16.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2043 create -f -'
Jul  7 16:04:49.889: INFO: stderr: ""
Jul  7 16:04:49.889: INFO: stdout: "e2e-test-crd-publish-openapi-8204-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul  7 16:04:49.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2043 delete e2e-test-crd-publish-openapi-8204-crds test-foo'
Jul  7 16:04:50.791: INFO: stderr: ""
Jul  7 16:04:50.791: INFO: stdout: "e2e-test-crd-publish-openapi-8204-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jul  7 16:04:50.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2043 apply -f -'
Jul  7 16:04:51.637: INFO: stderr: ""
Jul  7 16:04:51.637: INFO: stdout: "e2e-test-crd-publish-openapi-8204-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul  7 16:04:51.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2043 delete e2e-test-crd-publish-openapi-8204-crds test-foo'
Jul  7 16:04:52.012: INFO: stderr: ""
Jul  7 16:04:52.012: INFO: stdout: "e2e-test-crd-publish-openapi-8204-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jul  7 16:04:52.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2043 create -f -'
Jul  7 16:04:53.238: INFO: rc: 1
Jul  7 16:04:53.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2043 apply -f -'
Jul  7 16:04:53.874: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jul  7 16:04:53.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2043 create -f -'
Jul  7 16:04:54.968: INFO: rc: 1
Jul  7 16:04:54.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2043 apply -f -'
Jul  7 16:04:55.904: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jul  7 16:04:55.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8204-crds'
Jul  7 16:04:57.091: INFO: stderr: ""
Jul  7 16:04:57.091: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8204-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jul  7 16:04:57.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8204-crds.metadata'
Jul  7 16:04:58.951: INFO: stderr: ""
Jul  7 16:04:58.951: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8204-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jul  7 16:04:58.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8204-crds.spec'
Jul  7 16:04:59.748: INFO: stderr: ""
Jul  7 16:04:59.748: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8204-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jul  7 16:04:59.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8204-crds.spec.bars'
Jul  7 16:05:01.115: INFO: stderr: ""
Jul  7 16:05:01.115: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8204-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jul  7 16:05:01.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8204-crds.spec.bars2'
Jul  7 16:05:01.946: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:05:05.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2043" for this suite.

• [SLOW TEST:53.689 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":105,"skipped":1713,"failed":0}
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:05:06.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-8e0f1af3-b6c4-4a3f-872d-2726d848785f
STEP: Creating secret with name s-test-opt-upd-71711d0b-e7ef-4fff-97d6-2f78f8f2fea5
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-8e0f1af3-b6c4-4a3f-872d-2726d848785f
STEP: Updating secret s-test-opt-upd-71711d0b-e7ef-4fff-97d6-2f78f8f2fea5
STEP: Creating secret with name s-test-opt-create-5399253b-0dd8-48df-8c88-6c6df7892ca7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:06:58.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4256" for this suite.

• [SLOW TEST:111.793 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1713,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:06:58.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul  7 16:06:58.470: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  7 16:06:58.549: INFO: Waiting for terminating namespaces to be deleted...
Jul  7 16:06:58.551: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Jul  7 16:06:58.556: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container status recorded)
Jul  7 16:06:58.556: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:06:58.556: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container status recorded)
Jul  7 16:06:58.556: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 16:06:58.556: INFO: pod-secrets-3efc2538-1ec0-4b3e-bbf7-a3af8224557f from secrets-4256 started at 2020-07-07 16:05:11 +0000 UTC (3 container statuses recorded)
Jul  7 16:06:58.556: INFO: 	Container creates-volume-test ready: true, restart count 0
Jul  7 16:06:58.556: INFO: 	Container dels-volume-test ready: true, restart count 0
Jul  7 16:06:58.556: INFO: 	Container upds-volume-test ready: true, restart count 0
Jul  7 16:06:58.556: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Jul  7 16:06:58.572: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container status recorded)
Jul  7 16:06:58.572: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:06:58.572: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container status recorded)
Jul  7 16:06:58.572: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8aef8584-4803-49be-b7bf-019863d8a923 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8aef8584-4803-49be-b7bf-019863d8a923 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8aef8584-4803-49be-b7bf-019863d8a923
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:07:14.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3704" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.705 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":107,"skipped":1756,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:07:14.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-1bdc7d8b-a9b3-4fd7-ae96-1ff2142d9496
STEP: Creating a pod to test consume configMaps
Jul  7 16:07:15.120: INFO: Waiting up to 5m0s for pod "pod-configmaps-776df0f3-007b-4a0c-8b19-ae71aefc74ad" in namespace "configmap-5504" to be "success or failure"
Jul  7 16:07:15.255: INFO: Pod "pod-configmaps-776df0f3-007b-4a0c-8b19-ae71aefc74ad": Phase="Pending", Reason="", readiness=false. Elapsed: 134.963994ms
Jul  7 16:07:17.598: INFO: Pod "pod-configmaps-776df0f3-007b-4a0c-8b19-ae71aefc74ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478121374s
Jul  7 16:07:20.063: INFO: Pod "pod-configmaps-776df0f3-007b-4a0c-8b19-ae71aefc74ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.942999514s
Jul  7 16:07:22.638: INFO: Pod "pod-configmaps-776df0f3-007b-4a0c-8b19-ae71aefc74ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.518361064s
STEP: Saw pod success
Jul  7 16:07:22.638: INFO: Pod "pod-configmaps-776df0f3-007b-4a0c-8b19-ae71aefc74ad" satisfied condition "success or failure"
Jul  7 16:07:22.641: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-776df0f3-007b-4a0c-8b19-ae71aefc74ad container configmap-volume-test: 
STEP: delete the pod
Jul  7 16:07:22.960: INFO: Waiting for pod pod-configmaps-776df0f3-007b-4a0c-8b19-ae71aefc74ad to disappear
Jul  7 16:07:22.992: INFO: Pod pod-configmaps-776df0f3-007b-4a0c-8b19-ae71aefc74ad no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:07:22.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5504" for this suite.

• [SLOW TEST:8.224 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1761,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:07:23.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-32c68c48-0e00-40d2-a077-044c4a107aa6
STEP: Creating a pod to test consume configMaps
Jul  7 16:07:23.867: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19d7738f-d0b8-4880-9b6e-008de55c4b02" in namespace "projected-6853" to be "success or failure"
Jul  7 16:07:23.961: INFO: Pod "pod-projected-configmaps-19d7738f-d0b8-4880-9b6e-008de55c4b02": Phase="Pending", Reason="", readiness=false. Elapsed: 94.525391ms
Jul  7 16:07:26.035: INFO: Pod "pod-projected-configmaps-19d7738f-d0b8-4880-9b6e-008de55c4b02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168232284s
Jul  7 16:07:28.037: INFO: Pod "pod-projected-configmaps-19d7738f-d0b8-4880-9b6e-008de55c4b02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170585222s
Jul  7 16:07:30.418: INFO: Pod "pod-projected-configmaps-19d7738f-d0b8-4880-9b6e-008de55c4b02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.551020617s
Jul  7 16:07:32.616: INFO: Pod "pod-projected-configmaps-19d7738f-d0b8-4880-9b6e-008de55c4b02": Phase="Pending", Reason="", readiness=false. Elapsed: 8.749185366s
Jul  7 16:07:34.740: INFO: Pod "pod-projected-configmaps-19d7738f-d0b8-4880-9b6e-008de55c4b02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.873008365s
STEP: Saw pod success
Jul  7 16:07:34.740: INFO: Pod "pod-projected-configmaps-19d7738f-d0b8-4880-9b6e-008de55c4b02" satisfied condition "success or failure"
Jul  7 16:07:34.742: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-19d7738f-d0b8-4880-9b6e-008de55c4b02 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  7 16:07:35.365: INFO: Waiting for pod pod-projected-configmaps-19d7738f-d0b8-4880-9b6e-008de55c4b02 to disappear
Jul  7 16:07:36.082: INFO: Pod pod-projected-configmaps-19d7738f-d0b8-4880-9b6e-008de55c4b02 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:07:36.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6853" for this suite.

• [SLOW TEST:13.124 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1792,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:07:36.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  7 16:07:36.838: INFO: Waiting up to 5m0s for pod "pod-cfd41664-765e-47bc-810d-36aff7685105" in namespace "emptydir-8006" to be "success or failure"
Jul  7 16:07:37.172: INFO: Pod "pod-cfd41664-765e-47bc-810d-36aff7685105": Phase="Pending", Reason="", readiness=false. Elapsed: 333.421478ms
Jul  7 16:07:39.197: INFO: Pod "pod-cfd41664-765e-47bc-810d-36aff7685105": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358419008s
Jul  7 16:07:41.275: INFO: Pod "pod-cfd41664-765e-47bc-810d-36aff7685105": Phase="Running", Reason="", readiness=true. Elapsed: 4.436483768s
Jul  7 16:07:43.278: INFO: Pod "pod-cfd41664-765e-47bc-810d-36aff7685105": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.439587536s
STEP: Saw pod success
Jul  7 16:07:43.278: INFO: Pod "pod-cfd41664-765e-47bc-810d-36aff7685105" satisfied condition "success or failure"
Jul  7 16:07:43.280: INFO: Trying to get logs from node jerma-worker pod pod-cfd41664-765e-47bc-810d-36aff7685105 container test-container: 
STEP: delete the pod
Jul  7 16:07:43.430: INFO: Waiting for pod pod-cfd41664-765e-47bc-810d-36aff7685105 to disappear
Jul  7 16:07:43.497: INFO: Pod pod-cfd41664-765e-47bc-810d-36aff7685105 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:07:43.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8006" for this suite.

• [SLOW TEST:7.379 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1805,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:07:43.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-c3f96d2b-047d-46ca-b4eb-9e9a3a8f2299
STEP: Creating a pod to test consume secrets
Jul  7 16:07:45.397: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6f594c04-95f2-4840-a26b-2eb980eb9dd9" in namespace "projected-6187" to be "success or failure"
Jul  7 16:07:45.440: INFO: Pod "pod-projected-secrets-6f594c04-95f2-4840-a26b-2eb980eb9dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 42.841152ms
Jul  7 16:07:47.445: INFO: Pod "pod-projected-secrets-6f594c04-95f2-4840-a26b-2eb980eb9dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047564327s
Jul  7 16:07:50.053: INFO: Pod "pod-projected-secrets-6f594c04-95f2-4840-a26b-2eb980eb9dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.656176758s
Jul  7 16:07:52.055: INFO: Pod "pod-projected-secrets-6f594c04-95f2-4840-a26b-2eb980eb9dd9": Phase="Running", Reason="", readiness=true. Elapsed: 6.658278879s
Jul  7 16:07:54.058: INFO: Pod "pod-projected-secrets-6f594c04-95f2-4840-a26b-2eb980eb9dd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.66148847s
STEP: Saw pod success
Jul  7 16:07:54.059: INFO: Pod "pod-projected-secrets-6f594c04-95f2-4840-a26b-2eb980eb9dd9" satisfied condition "success or failure"
Jul  7 16:07:54.062: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-6f594c04-95f2-4840-a26b-2eb980eb9dd9 container projected-secret-volume-test: 
STEP: delete the pod
Jul  7 16:07:54.100: INFO: Waiting for pod pod-projected-secrets-6f594c04-95f2-4840-a26b-2eb980eb9dd9 to disappear
Jul  7 16:07:54.159: INFO: Pod pod-projected-secrets-6f594c04-95f2-4840-a26b-2eb980eb9dd9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:07:54.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6187" for this suite.

• [SLOW TEST:10.660 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1814,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:07:54.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-1405
STEP: creating replication controller nodeport-test in namespace services-1405
I0707 16:07:54.405473       6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1405, replica count: 2
I0707 16:07:57.455874       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 16:08:00.456070       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 16:08:03.456238       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 16:08:06.456487       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 16:08:09.456748       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  7 16:08:09.456: INFO: Creating new exec pod
Jul  7 16:08:21.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1405 execpod9t4sh -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jul  7 16:08:21.862: INFO: stderr: "I0707 16:08:21.780763    1251 log.go:172] (0xc0009a2630) (0xc000619ae0) Create stream\nI0707 16:08:21.780812    1251 log.go:172] (0xc0009a2630) (0xc000619ae0) Stream added, broadcasting: 1\nI0707 16:08:21.783173    1251 log.go:172] (0xc0009a2630) Reply frame received for 1\nI0707 16:08:21.783210    1251 log.go:172] (0xc0009a2630) (0xc000619cc0) Create stream\nI0707 16:08:21.783218    1251 log.go:172] (0xc0009a2630) (0xc000619cc0) Stream added, broadcasting: 3\nI0707 16:08:21.783898    1251 log.go:172] (0xc0009a2630) Reply frame received for 3\nI0707 16:08:21.783936    1251 log.go:172] (0xc0009a2630) (0xc000619d60) Create stream\nI0707 16:08:21.783951    1251 log.go:172] (0xc0009a2630) (0xc000619d60) Stream added, broadcasting: 5\nI0707 16:08:21.784636    1251 log.go:172] (0xc0009a2630) Reply frame received for 5\nI0707 16:08:21.851039    1251 log.go:172] (0xc0009a2630) Data frame received for 5\nI0707 16:08:21.851057    1251 log.go:172] (0xc000619d60) (5) Data frame handling\nI0707 16:08:21.851067    1251 log.go:172] (0xc000619d60) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0707 16:08:21.851610    1251 log.go:172] (0xc0009a2630) Data frame received for 5\nI0707 16:08:21.851626    1251 log.go:172] (0xc000619d60) (5) Data frame handling\nI0707 16:08:21.851640    1251 log.go:172] (0xc000619d60) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0707 16:08:21.852062    1251 log.go:172] (0xc0009a2630) Data frame received for 3\nI0707 16:08:21.852077    1251 log.go:172] (0xc000619cc0) (3) Data frame handling\nI0707 16:08:21.852125    1251 log.go:172] (0xc0009a2630) Data frame received for 5\nI0707 16:08:21.852138    1251 log.go:172] (0xc000619d60) (5) Data frame handling\nI0707 16:08:21.857439    1251 log.go:172] (0xc0009a2630) Data frame received for 1\nI0707 16:08:21.857457    1251 log.go:172] (0xc000619ae0) (1) Data frame handling\nI0707 16:08:21.857473    1251 log.go:172] (0xc000619ae0) (1) Data frame sent\nI0707 16:08:21.857652    1251 log.go:172] (0xc0009a2630) (0xc000619ae0) Stream removed, broadcasting: 1\nI0707 16:08:21.857676    1251 log.go:172] (0xc0009a2630) Go away received\nI0707 16:08:21.857998    1251 log.go:172] (0xc0009a2630) (0xc000619ae0) Stream removed, broadcasting: 1\nI0707 16:08:21.858014    1251 log.go:172] (0xc0009a2630) (0xc000619cc0) Stream removed, broadcasting: 3\nI0707 16:08:21.858021    1251 log.go:172] (0xc0009a2630) (0xc000619d60) Stream removed, broadcasting: 5\n"
Jul  7 16:08:21.862: INFO: stdout: ""
Jul  7 16:08:21.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1405 execpod9t4sh -- /bin/sh -x -c nc -zv -t -w 2 10.110.136.180 80'
Jul  7 16:08:22.089: INFO: stderr: "I0707 16:08:22.008356    1273 log.go:172] (0xc000a214a0) (0xc000a526e0) Create stream\nI0707 16:08:22.008398    1273 log.go:172] (0xc000a214a0) (0xc000a526e0) Stream added, broadcasting: 1\nI0707 16:08:22.012318    1273 log.go:172] (0xc000a214a0) Reply frame received for 1\nI0707 16:08:22.012368    1273 log.go:172] (0xc000a214a0) (0xc0005aa640) Create stream\nI0707 16:08:22.012378    1273 log.go:172] (0xc000a214a0) (0xc0005aa640) Stream added, broadcasting: 3\nI0707 16:08:22.013340    1273 log.go:172] (0xc000a214a0) Reply frame received for 3\nI0707 16:08:22.013371    1273 log.go:172] (0xc000a214a0) (0xc000795400) Create stream\nI0707 16:08:22.013385    1273 log.go:172] (0xc000a214a0) (0xc000795400) Stream added, broadcasting: 5\nI0707 16:08:22.014185    1273 log.go:172] (0xc000a214a0) Reply frame received for 5\nI0707 16:08:22.073504    1273 log.go:172] (0xc000a214a0) Data frame received for 5\nI0707 16:08:22.073539    1273 log.go:172] (0xc000795400) (5) Data frame handling\nI0707 16:08:22.073560    1273 log.go:172] (0xc000795400) (5) Data frame sent\n+ nc -zv -t -w 2 10.110.136.180 80\nI0707 16:08:22.077950    1273 log.go:172] (0xc000a214a0) Data frame received for 5\nI0707 16:08:22.077992    1273 log.go:172] (0xc000795400) (5) Data frame handling\nI0707 16:08:22.078020    1273 log.go:172] (0xc000795400) (5) Data frame sent\nConnection to 10.110.136.180 80 port [tcp/http] succeeded!\nI0707 16:08:22.078120    1273 log.go:172] (0xc000a214a0) Data frame received for 3\nI0707 16:08:22.078137    1273 log.go:172] (0xc0005aa640) (3) Data frame handling\nI0707 16:08:22.078149    1273 log.go:172] (0xc000a214a0) Data frame received for 5\nI0707 16:08:22.078159    1273 log.go:172] (0xc000795400) (5) Data frame handling\nI0707 16:08:22.079734    1273 log.go:172] (0xc000a214a0) Data frame received for 1\nI0707 16:08:22.079750    1273 log.go:172] (0xc000a526e0) (1) Data frame handling\nI0707 16:08:22.079757    1273 log.go:172] (0xc000a526e0) (1) Data frame sent\nI0707 16:08:22.079764    1273 log.go:172] (0xc000a214a0) (0xc000a526e0) Stream removed, broadcasting: 1\nI0707 16:08:22.079772    1273 log.go:172] (0xc000a214a0) Go away received\nI0707 16:08:22.080107    1273 log.go:172] (0xc000a214a0) (0xc000a526e0) Stream removed, broadcasting: 1\nI0707 16:08:22.080122    1273 log.go:172] (0xc000a214a0) (0xc0005aa640) Stream removed, broadcasting: 3\nI0707 16:08:22.080130    1273 log.go:172] (0xc000a214a0) (0xc000795400) Stream removed, broadcasting: 5\n"
Jul  7 16:08:22.089: INFO: stdout: ""
Jul  7 16:08:22.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1405 execpod9t4sh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32765'
Jul  7 16:08:22.292: INFO: stderr: "I0707 16:08:22.220121    1292 log.go:172] (0xc000bba000) (0xc000a4e000) Create stream\nI0707 16:08:22.220187    1292 log.go:172] (0xc000bba000) (0xc000a4e000) Stream added, broadcasting: 1\nI0707 16:08:22.222118    1292 log.go:172] (0xc000bba000) Reply frame received for 1\nI0707 16:08:22.222163    1292 log.go:172] (0xc000bba000) (0xc0001f8000) Create stream\nI0707 16:08:22.222179    1292 log.go:172] (0xc000bba000) (0xc0001f8000) Stream added, broadcasting: 3\nI0707 16:08:22.223017    1292 log.go:172] (0xc000bba000) Reply frame received for 3\nI0707 16:08:22.223040    1292 log.go:172] (0xc000bba000) (0xc00061bc20) Create stream\nI0707 16:08:22.223057    1292 log.go:172] (0xc000bba000) (0xc00061bc20) Stream added, broadcasting: 5\nI0707 16:08:22.224162    1292 log.go:172] (0xc000bba000) Reply frame received for 5\nI0707 16:08:22.284516    1292 log.go:172] (0xc000bba000) Data frame received for 3\nI0707 16:08:22.284579    1292 log.go:172] (0xc0001f8000) (3) Data frame handling\nI0707 16:08:22.284664    1292 log.go:172] (0xc000bba000) Data frame received for 5\nI0707 16:08:22.284693    1292 log.go:172] (0xc00061bc20) (5) Data frame handling\nI0707 16:08:22.284720    1292 log.go:172] (0xc00061bc20) (5) Data frame sent\nI0707 16:08:22.284752    1292 log.go:172] (0xc000bba000) Data frame received for 5\nI0707 16:08:22.284772    1292 log.go:172] (0xc00061bc20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32765\nConnection to 172.17.0.10 32765 port [tcp/32765] succeeded!\nI0707 16:08:22.286310    1292 log.go:172] (0xc000bba000) Data frame received for 1\nI0707 16:08:22.286351    1292 log.go:172] (0xc000a4e000) (1) Data frame handling\nI0707 16:08:22.286373    1292 log.go:172] (0xc000a4e000) (1) Data frame sent\nI0707 16:08:22.286394    1292 log.go:172] (0xc000bba000) (0xc000a4e000) Stream removed, broadcasting: 1\nI0707 16:08:22.286429    1292 log.go:172] (0xc000bba000) Go away received\nI0707 16:08:22.286967    1292 log.go:172] (0xc000bba000) (0xc000a4e000) Stream removed, broadcasting: 1\nI0707 16:08:22.286989    1292 log.go:172] (0xc000bba000) (0xc0001f8000) Stream removed, broadcasting: 3\nI0707 16:08:22.287000    1292 log.go:172] (0xc000bba000) (0xc00061bc20) Stream removed, broadcasting: 5\n"
Jul  7 16:08:22.292: INFO: stdout: ""
Jul  7 16:08:22.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1405 execpod9t4sh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32765'
Jul  7 16:08:22.540: INFO: stderr: "I0707 16:08:22.463843    1314 log.go:172] (0xc0000f4a50) (0xc000a2c000) Create stream\nI0707 16:08:22.463901    1314 log.go:172] (0xc0000f4a50) (0xc000a2c000) Stream added, broadcasting: 1\nI0707 16:08:22.466502    1314 log.go:172] (0xc0000f4a50) Reply frame received for 1\nI0707 16:08:22.466557    1314 log.go:172] (0xc0000f4a50) (0xc00065ba40) Create stream\nI0707 16:08:22.466574    1314 log.go:172] (0xc0000f4a50) (0xc00065ba40) Stream added, broadcasting: 3\nI0707 16:08:22.467382    1314 log.go:172] (0xc0000f4a50) Reply frame received for 3\nI0707 16:08:22.467419    1314 log.go:172] (0xc0000f4a50) (0xc00065bc20) Create stream\nI0707 16:08:22.467429    1314 log.go:172] (0xc0000f4a50) (0xc00065bc20) Stream added, broadcasting: 5\nI0707 16:08:22.468134    1314 log.go:172] (0xc0000f4a50) Reply frame received for 5\nI0707 16:08:22.533340    1314 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0707 16:08:22.533362    1314 log.go:172] (0xc00065bc20) (5) Data frame handling\nI0707 16:08:22.533372    1314 log.go:172] (0xc00065bc20) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 32765\nI0707 16:08:22.534691    1314 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0707 16:08:22.534709    1314 log.go:172] (0xc00065bc20) (5) Data frame handling\nConnection to 172.17.0.8 32765 port [tcp/32765] succeeded!\nI0707 16:08:22.534785    1314 log.go:172] (0xc00065bc20) (5) Data frame sent\nI0707 16:08:22.534801    1314 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0707 16:08:22.534811    1314 log.go:172] (0xc00065bc20) (5) Data frame handling\nI0707 16:08:22.534904    1314 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0707 16:08:22.534930    1314 log.go:172] (0xc00065ba40) (3) Data frame handling\nI0707 16:08:22.535831    1314 log.go:172] (0xc0000f4a50) Data frame received for 1\nI0707 16:08:22.535845    1314 log.go:172] (0xc000a2c000) (1) Data frame handling\nI0707 16:08:22.535861    1314 log.go:172] (0xc000a2c000) (1) Data frame sent\nI0707 16:08:22.535893    1314 log.go:172] (0xc0000f4a50) (0xc000a2c000) Stream removed, broadcasting: 1\nI0707 16:08:22.535974    1314 log.go:172] (0xc0000f4a50) Go away received\nI0707 16:08:22.536163    1314 log.go:172] (0xc0000f4a50) (0xc000a2c000) Stream removed, broadcasting: 1\nI0707 16:08:22.536174    1314 log.go:172] (0xc0000f4a50) (0xc00065ba40) Stream removed, broadcasting: 3\nI0707 16:08:22.536180    1314 log.go:172] (0xc0000f4a50) (0xc00065bc20) Stream removed, broadcasting: 5\n"
Jul  7 16:08:22.540: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:08:22.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1405" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:28.402 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":112,"skipped":1842,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:08:22.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-4a842d6a-f482-4579-9b32-321e0b166a4c
STEP: Creating a pod to test consume configMaps
Jul  7 16:08:22.719: INFO: Waiting up to 5m0s for pod "pod-configmaps-60768cd0-c063-44f0-9a53-10aadd1a8810" in namespace "configmap-2087" to be "success or failure"
Jul  7 16:08:22.810: INFO: Pod "pod-configmaps-60768cd0-c063-44f0-9a53-10aadd1a8810": Phase="Pending", Reason="", readiness=false. Elapsed: 91.198792ms
Jul  7 16:08:24.830: INFO: Pod "pod-configmaps-60768cd0-c063-44f0-9a53-10aadd1a8810": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111826559s
Jul  7 16:08:27.167: INFO: Pod "pod-configmaps-60768cd0-c063-44f0-9a53-10aadd1a8810": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448195298s
Jul  7 16:08:29.170: INFO: Pod "pod-configmaps-60768cd0-c063-44f0-9a53-10aadd1a8810": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.450968117s
STEP: Saw pod success
Jul  7 16:08:29.170: INFO: Pod "pod-configmaps-60768cd0-c063-44f0-9a53-10aadd1a8810" satisfied condition "success or failure"
Jul  7 16:08:29.186: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-60768cd0-c063-44f0-9a53-10aadd1a8810 container configmap-volume-test: 
STEP: delete the pod
Jul  7 16:08:29.336: INFO: Waiting for pod pod-configmaps-60768cd0-c063-44f0-9a53-10aadd1a8810 to disappear
Jul  7 16:08:29.342: INFO: Pod pod-configmaps-60768cd0-c063-44f0-9a53-10aadd1a8810 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:08:29.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2087" for this suite.

• [SLOW TEST:6.844 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1873,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:08:29.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-4978/configmap-test-2e5cbb7c-60f0-4fd2-95fe-01910df0db2e
STEP: Creating a pod to test consume configMaps
Jul  7 16:08:30.588: INFO: Waiting up to 5m0s for pod "pod-configmaps-7946b03f-49c4-4e7e-a4e2-dedbd66c525e" in namespace "configmap-4978" to be "success or failure"
Jul  7 16:08:30.678: INFO: Pod "pod-configmaps-7946b03f-49c4-4e7e-a4e2-dedbd66c525e": Phase="Pending", Reason="", readiness=false. Elapsed: 90.290239ms
Jul  7 16:08:32.783: INFO: Pod "pod-configmaps-7946b03f-49c4-4e7e-a4e2-dedbd66c525e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195146952s
Jul  7 16:08:34.786: INFO: Pod "pod-configmaps-7946b03f-49c4-4e7e-a4e2-dedbd66c525e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197456868s
Jul  7 16:08:36.879: INFO: Pod "pod-configmaps-7946b03f-49c4-4e7e-a4e2-dedbd66c525e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.290788251s
STEP: Saw pod success
Jul  7 16:08:36.879: INFO: Pod "pod-configmaps-7946b03f-49c4-4e7e-a4e2-dedbd66c525e" satisfied condition "success or failure"
Jul  7 16:08:36.881: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-7946b03f-49c4-4e7e-a4e2-dedbd66c525e container env-test: 
STEP: delete the pod
Jul  7 16:08:36.972: INFO: Waiting for pod pod-configmaps-7946b03f-49c4-4e7e-a4e2-dedbd66c525e to disappear
Jul  7 16:08:36.976: INFO: Pod pod-configmaps-7946b03f-49c4-4e7e-a4e2-dedbd66c525e no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:08:36.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4978" for this suite.

• [SLOW TEST:7.649 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1958,"failed":0}
SSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:08:37.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Jul  7 16:08:37.314: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8278" to be "success or failure"
Jul  7 16:08:37.405: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 91.249177ms
Jul  7 16:08:39.460: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1453453s
Jul  7 16:08:41.720: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.405767319s
Jul  7 16:08:44.067: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.752813594s
Jul  7 16:08:46.376: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.061878546s
Jul  7 16:08:48.528: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.213802677s
Jul  7 16:08:50.559: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.244418245s
Jul  7 16:08:52.563: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.248392801s
Jul  7 16:08:54.719: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.404453855s
Jul  7 16:08:56.721: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.407233694s
STEP: Saw pod success
Jul  7 16:08:56.721: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul  7 16:08:56.723: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul  7 16:08:57.146: INFO: Waiting for pod pod-host-path-test to disappear
Jul  7 16:08:57.226: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:08:57.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8278" for this suite.

• [SLOW TEST:20.171 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1962,"failed":0}
SSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:08:57.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3886, will wait for the garbage collector to delete the pods
Jul  7 16:09:07.729: INFO: Deleting Job.batch foo took: 90.785155ms
Jul  7 16:09:08.429: INFO: Terminating Job.batch foo pods took: 700.203377ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:09:48.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3886" for this suite.

• [SLOW TEST:52.199 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":116,"skipped":1966,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:09:49.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul  7 16:10:18.317: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 16:10:18.558: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 16:10:20.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 16:10:20.732: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 16:10:22.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 16:10:22.690: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 16:10:24.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 16:10:24.588: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 16:10:26.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 16:10:26.785: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 16:10:28.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 16:10:28.566: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:10:28.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4149" for this suite.

• [SLOW TEST:39.451 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1996,"failed":0}
SS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:10:28.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Jul  7 16:10:37.973: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-452 pod-service-account-8a0fbbc6-5fa5-4185-ada2-581582065d97 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jul  7 16:10:38.233: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-452 pod-service-account-8a0fbbc6-5fa5-4185-ada2-581582065d97 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jul  7 16:10:38.440: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-452 pod-service-account-8a0fbbc6-5fa5-4185-ada2-581582065d97 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:10:38.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-452" for this suite.

• [SLOW TEST:9.766 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":118,"skipped":1998,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:10:38.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-b5582e4e-b1c5-424e-abb1-68a704f37e6f
STEP: Creating configMap with name cm-test-opt-upd-51c41992-b936-4077-b4d8-f6277775d519
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b5582e4e-b1c5-424e-abb1-68a704f37e6f
STEP: Updating configmap cm-test-opt-upd-51c41992-b936-4077-b4d8-f6277775d519
STEP: Creating configMap with name cm-test-opt-create-673a8c71-4d32-4066-9f70-98015d0dae74
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:12:02.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2379" for this suite.

• [SLOW TEST:83.397 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2038,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:12:02.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:12:08.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1198" for this suite.

• [SLOW TEST:6.101 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2056,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:12:08.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  7 16:12:14.076: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:12:14.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9574" for this suite.

• [SLOW TEST:5.996 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2057,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:12:14.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7323
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jul  7 16:12:14.315: INFO: Found 0 stateful pods, waiting for 3
Jul  7 16:12:24.355: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 16:12:24.356: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 16:12:24.356: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  7 16:12:34.321: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 16:12:34.321: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 16:12:34.321: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul  7 16:12:34.349: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul  7 16:12:44.778: INFO: Updating stateful set ss2
Jul  7 16:12:45.267: INFO: Waiting for Pod statefulset-7323/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jul  7 16:12:57.686: INFO: Found 2 stateful pods, waiting for 3
Jul  7 16:13:07.690: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 16:13:07.690: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 16:13:07.690: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  7 16:13:17.691: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 16:13:17.691: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 16:13:17.691: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul  7 16:13:17.712: INFO: Updating stateful set ss2
Jul  7 16:13:17.839: INFO: Waiting for Pod statefulset-7323/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  7 16:13:27.864: INFO: Updating stateful set ss2
Jul  7 16:13:27.875: INFO: Waiting for StatefulSet statefulset-7323/ss2 to complete update
Jul  7 16:13:27.875: INFO: Waiting for Pod statefulset-7323/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul  7 16:13:37.954: INFO: Waiting for StatefulSet statefulset-7323/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul  7 16:13:47.883: INFO: Deleting all statefulset in ns statefulset-7323
Jul  7 16:13:47.886: INFO: Scaling statefulset ss2 to 0
Jul  7 16:14:17.931: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 16:14:17.934: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:14:17.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7323" for this suite.

• [SLOW TEST:123.839 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":122,"skipped":2065,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:14:17.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:14:18.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-292" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":123,"skipped":2110,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:14:18.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 16:14:18.597: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 16:14:20.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:14:22.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:14:25.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:14:27.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735258, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 16:14:31.287: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:14:33.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4704" for this suite.
STEP: Destroying namespace "webhook-4704-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.947 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":124,"skipped":2135,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:14:35.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:14:53.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8340" for this suite.

• [SLOW TEST:18.048 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":125,"skipped":2135,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:14:53.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-3172fea0-c095-4cb2-950b-ad007f68d012
STEP: Creating a pod to test consume configMaps
Jul  7 16:14:53.158: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3cc6877-7c7b-415b-a7fe-1d9fbb7d934e" in namespace "projected-5876" to be "success or failure"
Jul  7 16:14:53.161: INFO: Pod "pod-projected-configmaps-b3cc6877-7c7b-415b-a7fe-1d9fbb7d934e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.436948ms
Jul  7 16:14:55.166: INFO: Pod "pod-projected-configmaps-b3cc6877-7c7b-415b-a7fe-1d9fbb7d934e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007117878s
Jul  7 16:14:57.169: INFO: Pod "pod-projected-configmaps-b3cc6877-7c7b-415b-a7fe-1d9fbb7d934e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010892689s
Jul  7 16:14:59.192: INFO: Pod "pod-projected-configmaps-b3cc6877-7c7b-415b-a7fe-1d9fbb7d934e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033363856s
Jul  7 16:15:01.246: INFO: Pod "pod-projected-configmaps-b3cc6877-7c7b-415b-a7fe-1d9fbb7d934e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087550556s
STEP: Saw pod success
Jul  7 16:15:01.246: INFO: Pod "pod-projected-configmaps-b3cc6877-7c7b-415b-a7fe-1d9fbb7d934e" satisfied condition "success or failure"
Jul  7 16:15:01.248: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-b3cc6877-7c7b-415b-a7fe-1d9fbb7d934e container projected-configmap-volume-test: 
STEP: delete the pod
Jul  7 16:15:01.708: INFO: Waiting for pod pod-projected-configmaps-b3cc6877-7c7b-415b-a7fe-1d9fbb7d934e to disappear
Jul  7 16:15:01.754: INFO: Pod pod-projected-configmaps-b3cc6877-7c7b-415b-a7fe-1d9fbb7d934e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:15:01.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5876" for this suite.

• [SLOW TEST:8.850 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2137,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:15:01.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Jul  7 16:15:02.152: INFO: Waiting up to 5m0s for pod "var-expansion-32ff64cd-2240-48a8-aa51-50213ea096b0" in namespace "var-expansion-8051" to be "success or failure"
Jul  7 16:15:02.155: INFO: Pod "var-expansion-32ff64cd-2240-48a8-aa51-50213ea096b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.706228ms
Jul  7 16:15:04.159: INFO: Pod "var-expansion-32ff64cd-2240-48a8-aa51-50213ea096b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006561263s
Jul  7 16:15:06.163: INFO: Pod "var-expansion-32ff64cd-2240-48a8-aa51-50213ea096b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01034467s
Jul  7 16:15:08.167: INFO: Pod "var-expansion-32ff64cd-2240-48a8-aa51-50213ea096b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014218519s
STEP: Saw pod success
Jul  7 16:15:08.167: INFO: Pod "var-expansion-32ff64cd-2240-48a8-aa51-50213ea096b0" satisfied condition "success or failure"
Jul  7 16:15:08.170: INFO: Trying to get logs from node jerma-worker pod var-expansion-32ff64cd-2240-48a8-aa51-50213ea096b0 container dapi-container: 
STEP: delete the pod
Jul  7 16:15:08.241: INFO: Waiting for pod var-expansion-32ff64cd-2240-48a8-aa51-50213ea096b0 to disappear
Jul  7 16:15:08.254: INFO: Pod var-expansion-32ff64cd-2240-48a8-aa51-50213ea096b0 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:15:08.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8051" for this suite.

• [SLOW TEST:6.363 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2141,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:15:08.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul  7 16:15:08.401: INFO: Waiting up to 5m0s for pod "downward-api-c8189dbf-2576-4ad0-b69c-920a5fc49772" in namespace "downward-api-2393" to be "success or failure"
Jul  7 16:15:08.406: INFO: Pod "downward-api-c8189dbf-2576-4ad0-b69c-920a5fc49772": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74104ms
Jul  7 16:15:10.415: INFO: Pod "downward-api-c8189dbf-2576-4ad0-b69c-920a5fc49772": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013479495s
Jul  7 16:15:12.478: INFO: Pod "downward-api-c8189dbf-2576-4ad0-b69c-920a5fc49772": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076393214s
Jul  7 16:15:14.778: INFO: Pod "downward-api-c8189dbf-2576-4ad0-b69c-920a5fc49772": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.376486393s
STEP: Saw pod success
Jul  7 16:15:14.778: INFO: Pod "downward-api-c8189dbf-2576-4ad0-b69c-920a5fc49772" satisfied condition "success or failure"
Jul  7 16:15:14.782: INFO: Trying to get logs from node jerma-worker2 pod downward-api-c8189dbf-2576-4ad0-b69c-920a5fc49772 container dapi-container: 
STEP: delete the pod
Jul  7 16:15:15.103: INFO: Waiting for pod downward-api-c8189dbf-2576-4ad0-b69c-920a5fc49772 to disappear
Jul  7 16:15:15.165: INFO: Pod downward-api-c8189dbf-2576-4ad0-b69c-920a5fc49772 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:15:15.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2393" for this suite.

• [SLOW TEST:6.911 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2156,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:15:15.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-c73fa2ef-9acc-4ec3-aae6-43a5194fff62
STEP: Creating configMap with name cm-test-opt-upd-018dab41-89cc-4da8-ae59-6e3cb5f4c008
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c73fa2ef-9acc-4ec3-aae6-43a5194fff62
STEP: Updating configmap cm-test-opt-upd-018dab41-89cc-4da8-ae59-6e3cb5f4c008
STEP: Creating configMap with name cm-test-opt-create-d4b4e8a1-e3a1-4e8b-8be0-45ee7e70a3cc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:15:27.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1662" for this suite.

• [SLOW TEST:12.494 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2204,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:15:27.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357
STEP: creating a pod
Jul  7 16:15:27.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6960 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jul  7 16:15:34.073: INFO: stderr: ""
Jul  7 16:15:34.073: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Jul  7 16:15:34.073: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jul  7 16:15:34.074: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6960" to be "running and ready, or succeeded"
Jul  7 16:15:34.397: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 323.630805ms
Jul  7 16:15:36.563: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.489544127s
Jul  7 16:15:38.567: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.493632248s
Jul  7 16:15:40.921: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.847896052s
Jul  7 16:15:40.921: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jul  7 16:15:40.922: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jul  7 16:15:40.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6960'
Jul  7 16:15:41.038: INFO: stderr: ""
Jul  7 16:15:41.038: INFO: stdout: "I0707 16:15:39.330911       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/59lk 244\nI0707 16:15:39.531061       1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/zgs 467\nI0707 16:15:39.731102       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/p6b9 366\nI0707 16:15:39.931136       1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/q4cg 324\nI0707 16:15:40.131218       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/rns 258\nI0707 16:15:40.331205       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/cnl 568\nI0707 16:15:40.531088       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/6sd 223\nI0707 16:15:40.731101       1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/r8w7 396\nI0707 16:15:40.931118       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/hxxv 410\n"
STEP: limiting log lines
Jul  7 16:15:41.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6960 --tail=1'
Jul  7 16:15:41.972: INFO: stderr: ""
Jul  7 16:15:41.972: INFO: stdout: "I0707 16:15:41.531084       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/6vz2 440\nI0707 16:15:41.731418       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/xj44 522\nI0707 16:15:41.931148       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/88g 281\n"
Jul  7 16:15:41.972: INFO: got output "I0707 16:15:41.531084       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/6vz2 440\nI0707 16:15:41.731418       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/xj44 522\nI0707 16:15:41.931148       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/88g 281\n"
Jul  7 16:15:41.972: FAIL: Expected
    : 3
to equal
    : 1
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
Jul  7 16:15:41.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6960'
Jul  7 16:15:45.628: INFO: stderr: ""
Jul  7 16:15:45.628: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "kubectl-6960".
STEP: Found 5 events.
Jul  7 16:15:45.699: INFO: At 2020-07-07 16:15:34 +0000 UTC - event for logs-generator: {default-scheduler } Scheduled: Successfully assigned kubectl-6960/logs-generator to jerma-worker2
Jul  7 16:15:45.699: INFO: At 2020-07-07 16:15:36 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jul  7 16:15:45.699: INFO: At 2020-07-07 16:15:39 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Created: Created container logs-generator
Jul  7 16:15:45.699: INFO: At 2020-07-07 16:15:40 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Started: Started container logs-generator
Jul  7 16:15:45.699: INFO: At 2020-07-07 16:15:42 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Killing: Stopping container logs-generator
Jul  7 16:15:45.702: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  7 16:15:45.702: INFO: 
Jul  7 16:15:45.815: INFO: 
Logging node info for node jerma-control-plane
Jul  7 16:15:45.818: INFO: Node Info: &Node{ObjectMeta:{jerma-control-plane   /api/v1/nodes/jerma-control-plane 314b373b-bee5-44b4-b5c2-11dae9a7b0a0 939616 0 2020-07-04 07:50:20 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-07 16:13:27 +0000 UTC,LastTransitionTime:2020-07-04 07:50:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-07 16:13:27 +0000 UTC,LastTransitionTime:2020-07-04 07:50:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-07 16:13:27 +0000 UTC,LastTransitionTime:2020-07-04 07:50:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-07 16:13:27 +0000 UTC,LastTransitionTime:2020-07-04 07:50:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.9,},NodeAddress{Type:Hostname,Address:jerma-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:38019c037cfd4087a82e4827871389a4,SystemUUID:e9de5062-4fa9-4d0b-8ec1-e753d472da92,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.17.5,KubeProxyVersion:v1.17.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.5],SizeBytes:144466737,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.5],SizeBytes:132100222,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.5],SizeBytes:131244355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.5],SizeBytes:111947057,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jul  7 16:15:45.818: INFO: 
Logging kubelet events for node jerma-control-plane
Jul  7 16:15:45.821: INFO: 
Logging pods the kubelet thinks are on node jerma-control-plane
Jul  7 16:15:45.841: INFO: coredns-6955765f44-pgl6s started at 2020-07-04 07:50:57 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:45.841: INFO: 	Container coredns ready: true, restart count 0
Jul  7 16:15:45.841: INFO: kube-controller-manager-jerma-control-plane started at 2020-07-04 07:50:25 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:45.841: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jul  7 16:15:45.841: INFO: local-path-provisioner-58f6947c7-87vc8 started at 2020-07-04 07:50:55 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:45.841: INFO: 	Container local-path-provisioner ready: true, restart count 0
Jul  7 16:15:45.841: INFO: kube-apiserver-jerma-control-plane started at 2020-07-04 07:50:25 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:45.841: INFO: 	Container kube-apiserver ready: true, restart count 0
Jul  7 16:15:45.841: INFO: kindnet-8r2ht started at 2020-07-04 07:50:40 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:45.841: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:15:45.841: INFO: kube-proxy-c7j2b started at 2020-07-04 07:50:40 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:45.841: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 16:15:45.841: INFO: coredns-6955765f44-wm87j started at 2020-07-04 07:50:55 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:45.841: INFO: 	Container coredns ready: true, restart count 0
Jul  7 16:15:45.841: INFO: kube-scheduler-jerma-control-plane started at 2020-07-04 07:50:25 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:45.841: INFO: 	Container kube-scheduler ready: true, restart count 0
Jul  7 16:15:45.841: INFO: etcd-jerma-control-plane started at 2020-07-04 07:50:25 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:45.841: INFO: 	Container etcd ready: true, restart count 0
W0707 16:15:45.846068       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  7 16:15:45.921: INFO: 
Latency metrics for node jerma-control-plane
Jul  7 16:15:45.921: INFO: 
Logging node info for node jerma-worker
Jul  7 16:15:45.924: INFO: Node Info: &Node{ObjectMeta:{jerma-worker   /api/v1/nodes/jerma-worker abd71db9-2f15-47eb-99d2-0b519aa52d9d 939489 0 2020-07-04 07:51:00 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-07 16:12:59 +0000 UTC,LastTransitionTime:2020-07-04 07:51:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-07 16:12:59 +0000 UTC,LastTransitionTime:2020-07-04 07:51:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-07 16:12:59 +0000 UTC,LastTransitionTime:2020-07-04 07:51:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-07 16:12:59 +0000 UTC,LastTransitionTime:2020-07-04 07:51:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.10,},NodeAddress{Type:Hostname,Address:jerma-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2a4e1dafbb2b455191da7fffaa26b3da,SystemUUID:60c88e46-3a4e-4689-ad5c-c6a385f95918,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.17.5,KubeProxyVersion:v1.17.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.5],SizeBytes:144466737,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.5],SizeBytes:132100222,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.5],SizeBytes:131244355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:fee4656ab3b4db6aba14143a8a8e1aa77ac743e3574e7f9ca126a96887505ccc docker.io/aquasec/kube-hunter:latest],SizeBytes:127871601,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.5],SizeBytes:111947057,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:16222606,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5979eaa13cb8b9b2027f4e75bb350a5af70d73719f2a260fa50f593ef63e857b docker.io/aquasec/kube-bench:latest],SizeBytes:8038593,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jul  7 16:15:45.925: INFO: 
Logging kubelet events for node jerma-worker
Jul  7 16:15:45.927: INFO: 
Logging pods the kubelet thinks are on node jerma-worker
Jul  7 16:15:45.931: INFO: kindnet-gnxwn started at 2020-07-04 07:51:00 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:45.931: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:15:45.931: INFO: kube-proxy-8sp85 started at 2020-07-04 07:51:00 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:45.931: INFO: 	Container kube-proxy ready: true, restart count 0
W0707 16:15:45.935199       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  7 16:15:45.987: INFO: 
Latency metrics for node jerma-worker
Jul  7 16:15:45.987: INFO: 
Logging node info for node jerma-worker2
Jul  7 16:15:45.991: INFO: Node Info: &Node{ObjectMeta:{jerma-worker2   /api/v1/nodes/jerma-worker2 ebdfdee0-59eb-4b9e-9b29-0d243a898833 939594 0 2020-07-04 07:51:01 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-07 16:13:23 +0000 UTC,LastTransitionTime:2020-07-04 07:51:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-07 16:13:23 +0000 UTC,LastTransitionTime:2020-07-04 07:51:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-07 16:13:23 +0000 UTC,LastTransitionTime:2020-07-04 07:51:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-07 16:13:23 +0000 UTC,LastTransitionTime:2020-07-04 07:51:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.8,},NodeAddress{Type:Hostname,Address:jerma-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:865f791789d143edb40508f1a223db8b,SystemUUID:6852aee5-5226-4e78-ab09-cbd7b39818c3,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.17.5,KubeProxyVersion:v1.17.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.5],SizeBytes:144466737,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.5],SizeBytes:132100222,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.5],SizeBytes:131244355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:fee4656ab3b4db6aba14143a8a8e1aa77ac743e3574e7f9ca126a96887505ccc docker.io/aquasec/kube-hunter:latest],SizeBytes:127871601,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.5],SizeBytes:111947057,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5979eaa13cb8b9b2027f4e75bb350a5af70d73719f2a260fa50f593ef63e857b docker.io/aquasec/kube-bench:latest],SizeBytes:8038593,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jul  7 16:15:45.991: INFO: 
Logging kubelet events for node jerma-worker2
Jul  7 16:15:45.995: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2
Jul  7 16:15:46.000: INFO: kindnet-qg8qr started at 2020-07-04 07:51:01 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:46.000: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:15:46.000: INFO: kube-proxy-b2ncl started at 2020-07-04 07:51:01 +0000 UTC (0+1 container statuses recorded)
Jul  7 16:15:46.000: INFO: 	Container kube-proxy ready: true, restart count 0
W0707 16:15:46.004816       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  7 16:15:46.043: INFO: 
Latency metrics for node jerma-worker2
Jul  7 16:15:46.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6960" for this suite.

• Failure [18.384 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353
    should be able to retrieve and filter logs  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

    Jul  7 16:15:41.972: Expected
        : 3
    to equal
        : 1

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1398
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":129,"skipped":2207,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:15:46.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 16:15:49.377: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 16:15:51.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735349, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735349, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735349, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735348, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:15:53.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735349, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735349, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735349, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735348, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:15:55.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735349, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735349, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735349, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729735348, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 16:15:58.755: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:15:58.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2562-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:15:59.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8506" for this suite.
STEP: Destroying namespace "webhook-8506-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.262 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":130,"skipped":2220,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:16:00.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jul  7 16:16:00.471: INFO: Pod name pod-release: Found 0 pods out of 1
Jul  7 16:16:05.498: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:16:06.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6201" for this suite.

• [SLOW TEST:6.718 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":131,"skipped":2226,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:16:07.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-5ccd0320-a72b-4ace-858a-b778c7ffc27f
STEP: Creating secret with name secret-projected-all-test-volume-7f13c5d8-f6a9-47da-9564-c1b11270193c
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul  7 16:16:07.708: INFO: Waiting up to 5m0s for pod "projected-volume-1a91f827-a592-474f-b874-da98de7846e9" in namespace "projected-7623" to be "success or failure"
Jul  7 16:16:07.906: INFO: Pod "projected-volume-1a91f827-a592-474f-b874-da98de7846e9": Phase="Pending", Reason="", readiness=false. Elapsed: 197.716219ms
Jul  7 16:16:09.909: INFO: Pod "projected-volume-1a91f827-a592-474f-b874-da98de7846e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201331126s
Jul  7 16:16:11.970: INFO: Pod "projected-volume-1a91f827-a592-474f-b874-da98de7846e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261914499s
Jul  7 16:16:14.066: INFO: Pod "projected-volume-1a91f827-a592-474f-b874-da98de7846e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.358163619s
Jul  7 16:16:16.070: INFO: Pod "projected-volume-1a91f827-a592-474f-b874-da98de7846e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.361717046s
STEP: Saw pod success
Jul  7 16:16:16.070: INFO: Pod "projected-volume-1a91f827-a592-474f-b874-da98de7846e9" satisfied condition "success or failure"
Jul  7 16:16:16.215: INFO: Trying to get logs from node jerma-worker pod projected-volume-1a91f827-a592-474f-b874-da98de7846e9 container projected-all-volume-test: 
STEP: delete the pod
Jul  7 16:16:16.278: INFO: Waiting for pod projected-volume-1a91f827-a592-474f-b874-da98de7846e9 to disappear
Jul  7 16:16:16.370: INFO: Pod projected-volume-1a91f827-a592-474f-b874-da98de7846e9 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:16:16.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7623" for this suite.

• [SLOW TEST:9.346 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2273,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:16:16.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7764
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-7764
I0707 16:16:17.011113       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7764, replica count: 2
I0707 16:16:20.061950       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 16:16:23.062193       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  7 16:16:23.062: INFO: Creating new exec pod
Jul  7 16:16:30.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7764 execpod2mw62 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul  7 16:16:30.312: INFO: stderr: "I0707 16:16:30.219557    1481 log.go:172] (0xc0001054a0) (0xc0005f99a0) Create stream\nI0707 16:16:30.219615    1481 log.go:172] (0xc0001054a0) (0xc0005f99a0) Stream added, broadcasting: 1\nI0707 16:16:30.222358    1481 log.go:172] (0xc0001054a0) Reply frame received for 1\nI0707 16:16:30.222394    1481 log.go:172] (0xc0001054a0) (0xc000972000) Create stream\nI0707 16:16:30.222404    1481 log.go:172] (0xc0001054a0) (0xc000972000) Stream added, broadcasting: 3\nI0707 16:16:30.223328    1481 log.go:172] (0xc0001054a0) Reply frame received for 3\nI0707 16:16:30.223367    1481 log.go:172] (0xc0001054a0) (0xc0005a2000) Create stream\nI0707 16:16:30.223382    1481 log.go:172] (0xc0001054a0) (0xc0005a2000) Stream added, broadcasting: 5\nI0707 16:16:30.224153    1481 log.go:172] (0xc0001054a0) Reply frame received for 5\nI0707 16:16:30.292936    1481 log.go:172] (0xc0001054a0) Data frame received for 5\nI0707 16:16:30.292958    1481 log.go:172] (0xc0005a2000) (5) Data frame handling\nI0707 16:16:30.292969    1481 log.go:172] (0xc0005a2000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0707 16:16:30.302794    1481 log.go:172] (0xc0001054a0) Data frame received for 5\nI0707 16:16:30.302843    1481 log.go:172] (0xc0005a2000) (5) Data frame handling\nI0707 16:16:30.302881    1481 log.go:172] (0xc0005a2000) (5) Data frame sent\nI0707 16:16:30.302906    1481 log.go:172] (0xc0001054a0) Data frame received for 5\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0707 16:16:30.302932    1481 log.go:172] (0xc0005a2000) (5) Data frame handling\nI0707 16:16:30.303129    1481 log.go:172] (0xc0001054a0) Data frame received for 3\nI0707 16:16:30.303154    1481 log.go:172] (0xc000972000) (3) Data frame handling\nI0707 16:16:30.305099    1481 log.go:172] (0xc0001054a0) Data frame received for 1\nI0707 16:16:30.305285    1481 log.go:172] (0xc0005f99a0) (1) Data frame handling\nI0707 16:16:30.305304    1481 log.go:172] (0xc0005f99a0) (1) Data frame sent\nI0707 16:16:30.305501    1481 log.go:172] (0xc0001054a0) (0xc0005f99a0) Stream removed, broadcasting: 1\nI0707 16:16:30.305711    1481 log.go:172] (0xc0001054a0) Go away received\nI0707 16:16:30.305963    1481 log.go:172] (0xc0001054a0) (0xc0005f99a0) Stream removed, broadcasting: 1\nI0707 16:16:30.305988    1481 log.go:172] (0xc0001054a0) (0xc000972000) Stream removed, broadcasting: 3\nI0707 16:16:30.306002    1481 log.go:172] (0xc0001054a0) (0xc0005a2000) Stream removed, broadcasting: 5\n"
Jul  7 16:16:30.312: INFO: stdout: ""
Jul  7 16:16:30.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7764 execpod2mw62 -- /bin/sh -x -c nc -zv -t -w 2 10.111.22.222 80'
Jul  7 16:16:30.602: INFO: stderr: "I0707 16:16:30.522524    1503 log.go:172] (0xc00085c0b0) (0xc0009a25a0) Create stream\nI0707 16:16:30.522575    1503 log.go:172] (0xc00085c0b0) (0xc0009a25a0) Stream added, broadcasting: 1\nI0707 16:16:30.527267    1503 log.go:172] (0xc00085c0b0) Reply frame received for 1\nI0707 16:16:30.527359    1503 log.go:172] (0xc00085c0b0) (0xc0009a2000) Create stream\nI0707 16:16:30.527389    1503 log.go:172] (0xc00085c0b0) (0xc0009a2000) Stream added, broadcasting: 3\nI0707 16:16:30.528482    1503 log.go:172] (0xc00085c0b0) Reply frame received for 3\nI0707 16:16:30.528521    1503 log.go:172] (0xc00085c0b0) (0xc000676640) Create stream\nI0707 16:16:30.528534    1503 log.go:172] (0xc00085c0b0) (0xc000676640) Stream added, broadcasting: 5\nI0707 16:16:30.529616    1503 log.go:172] (0xc00085c0b0) Reply frame received for 5\nI0707 16:16:30.595311    1503 log.go:172] (0xc00085c0b0) Data frame received for 3\nI0707 16:16:30.595338    1503 log.go:172] (0xc0009a2000) (3) Data frame handling\nI0707 16:16:30.595355    1503 log.go:172] (0xc00085c0b0) Data frame received for 5\nI0707 16:16:30.595361    1503 log.go:172] (0xc000676640) (5) Data frame handling\nI0707 16:16:30.595368    1503 log.go:172] (0xc000676640) (5) Data frame sent\nI0707 16:16:30.595375    1503 log.go:172] (0xc00085c0b0) Data frame received for 5\nI0707 16:16:30.595380    1503 log.go:172] (0xc000676640) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.22.222 80\nConnection to 10.111.22.222 80 port [tcp/http] succeeded!\nI0707 16:16:30.596490    1503 log.go:172] (0xc00085c0b0) Data frame received for 1\nI0707 16:16:30.596519    1503 log.go:172] (0xc0009a25a0) (1) Data frame handling\nI0707 16:16:30.596539    1503 log.go:172] (0xc0009a25a0) (1) Data frame sent\nI0707 16:16:30.596557    1503 log.go:172] (0xc00085c0b0) (0xc0009a25a0) Stream removed, broadcasting: 1\nI0707 16:16:30.596703    1503 log.go:172] (0xc00085c0b0) Go away received\nI0707 16:16:30.596941    1503 log.go:172] (0xc00085c0b0) (0xc0009a25a0) Stream removed, broadcasting: 1\nI0707 16:16:30.596957    1503 log.go:172] (0xc00085c0b0) (0xc0009a2000) Stream removed, broadcasting: 3\nI0707 16:16:30.596974    1503 log.go:172] (0xc00085c0b0) (0xc000676640) Stream removed, broadcasting: 5\n"
Jul  7 16:16:30.602: INFO: stdout: ""
Jul  7 16:16:30.602: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:16:30.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7764" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:14.435 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":133,"skipped":2277,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:16:30.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 16:16:30.943: INFO: Waiting up to 5m0s for pod "downwardapi-volume-328f3c58-2305-4916-82f0-8362603c61f1" in namespace "downward-api-2177" to be "success or failure"
Jul  7 16:16:30.952: INFO: Pod "downwardapi-volume-328f3c58-2305-4916-82f0-8362603c61f1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.464414ms
Jul  7 16:16:32.956: INFO: Pod "downwardapi-volume-328f3c58-2305-4916-82f0-8362603c61f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013068515s
Jul  7 16:16:34.960: INFO: Pod "downwardapi-volume-328f3c58-2305-4916-82f0-8362603c61f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017293863s
Jul  7 16:16:37.265: INFO: Pod "downwardapi-volume-328f3c58-2305-4916-82f0-8362603c61f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.321561937s
Jul  7 16:16:39.845: INFO: Pod "downwardapi-volume-328f3c58-2305-4916-82f0-8362603c61f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.901881536s
STEP: Saw pod success
Jul  7 16:16:39.845: INFO: Pod "downwardapi-volume-328f3c58-2305-4916-82f0-8362603c61f1" satisfied condition "success or failure"
Jul  7 16:16:39.918: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-328f3c58-2305-4916-82f0-8362603c61f1 container client-container: 
STEP: delete the pod
Jul  7 16:16:40.057: INFO: Waiting for pod downwardapi-volume-328f3c58-2305-4916-82f0-8362603c61f1 to disappear
Jul  7 16:16:40.150: INFO: Pod downwardapi-volume-328f3c58-2305-4916-82f0-8362603c61f1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:16:40.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2177" for this suite.

• [SLOW TEST:9.344 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2279,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:16:40.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:16:49.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3202" for this suite.

• [SLOW TEST:9.573 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2281,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:16:49.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-40ad7a65-bed2-43b6-9bd0-9b7a4f77f0aa
STEP: Creating a pod to test consume secrets
Jul  7 16:16:50.231: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cea13a3a-0305-4bba-a0e6-161ba44292a2" in namespace "projected-1351" to be "success or failure"
Jul  7 16:16:50.281: INFO: Pod "pod-projected-secrets-cea13a3a-0305-4bba-a0e6-161ba44292a2": Phase="Pending", Reason="", readiness=false. Elapsed: 50.423501ms
Jul  7 16:16:52.285: INFO: Pod "pod-projected-secrets-cea13a3a-0305-4bba-a0e6-161ba44292a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054400147s
Jul  7 16:16:54.317: INFO: Pod "pod-projected-secrets-cea13a3a-0305-4bba-a0e6-161ba44292a2": Phase="Running", Reason="", readiness=true. Elapsed: 4.086049643s
Jul  7 16:16:56.321: INFO: Pod "pod-projected-secrets-cea13a3a-0305-4bba-a0e6-161ba44292a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.090257403s
STEP: Saw pod success
Jul  7 16:16:56.321: INFO: Pod "pod-projected-secrets-cea13a3a-0305-4bba-a0e6-161ba44292a2" satisfied condition "success or failure"
Jul  7 16:16:56.323: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-cea13a3a-0305-4bba-a0e6-161ba44292a2 container projected-secret-volume-test: 
STEP: delete the pod
Jul  7 16:16:56.572: INFO: Waiting for pod pod-projected-secrets-cea13a3a-0305-4bba-a0e6-161ba44292a2 to disappear
Jul  7 16:16:56.664: INFO: Pod pod-projected-secrets-cea13a3a-0305-4bba-a0e6-161ba44292a2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:16:56.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1351" for this suite.

• [SLOW TEST:7.072 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2292,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:16:56.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5080 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5080
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5080 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5080
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5080.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5080.svc
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5080.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5080.svc
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5080.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5080.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5080.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5080.svc
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5080.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5080.svc
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5080.pod.cluster.local"}')
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord
  check="$$(dig +notcp +noall +answer +search 230.245.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.245.230_udp@PTR
  check="$$(dig +tcp +noall +answer +search 230.245.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.245.230_tcp@PTR
  sleep 1
done

STEP: Running these commands on jessie: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5080 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5080
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5080 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5080
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5080.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5080.svc
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5080.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5080.svc
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5080.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5080.svc
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5080.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5080.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5080.svc
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5080.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5080.svc
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5080.pod.cluster.local"}')
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord
  check="$$(dig +notcp +noall +answer +search 230.245.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.245.230_udp@PTR
  check="$$(dig +tcp +noall +answer +search 230.245.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.245.230_tcp@PTR
  sleep 1
done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  7 16:17:11.385: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.462: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.546: INFO: Unable to read wheezy_udp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.550: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.553: INFO: Unable to read wheezy_udp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.556: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.559: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.562: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.579: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.582: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.584: INFO: Unable to read jessie_udp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.587: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.590: INFO: Unable to read jessie_udp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.592: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.596: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.598: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:11.854: INFO: Lookups using dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5080 wheezy_tcp@dns-test-service.dns-5080 wheezy_udp@dns-test-service.dns-5080.svc wheezy_tcp@dns-test-service.dns-5080.svc wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5080 jessie_tcp@dns-test-service.dns-5080 jessie_udp@dns-test-service.dns-5080.svc jessie_tcp@dns-test-service.dns-5080.svc jessie_udp@_http._tcp.dns-test-service.dns-5080.svc jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc]

Jul  7 16:17:16.859: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.864: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.867: INFO: Unable to read wheezy_udp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.871: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.874: INFO: Unable to read wheezy_udp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.878: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.881: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.884: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.905: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.908: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.913: INFO: Unable to read jessie_udp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.916: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.919: INFO: Unable to read jessie_udp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.922: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.924: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:16.926: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:17.005: INFO: Lookups using dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5080 wheezy_tcp@dns-test-service.dns-5080 wheezy_udp@dns-test-service.dns-5080.svc wheezy_tcp@dns-test-service.dns-5080.svc wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5080 jessie_tcp@dns-test-service.dns-5080 jessie_udp@dns-test-service.dns-5080.svc jessie_tcp@dns-test-service.dns-5080.svc jessie_udp@_http._tcp.dns-test-service.dns-5080.svc jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc]

Jul  7 16:17:22.026: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.029: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.032: INFO: Unable to read wheezy_udp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.034: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.037: INFO: Unable to read wheezy_udp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.039: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.041: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.043: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.588: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.613: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.617: INFO: Unable to read jessie_udp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.620: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.623: INFO: Unable to read jessie_udp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.626: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.629: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.631: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:22.645: INFO: Lookups using dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5080 wheezy_tcp@dns-test-service.dns-5080 wheezy_udp@dns-test-service.dns-5080.svc wheezy_tcp@dns-test-service.dns-5080.svc wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5080 jessie_tcp@dns-test-service.dns-5080 jessie_udp@dns-test-service.dns-5080.svc jessie_tcp@dns-test-service.dns-5080.svc jessie_udp@_http._tcp.dns-test-service.dns-5080.svc jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc]

Jul  7 16:17:26.858: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.860: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.862: INFO: Unable to read wheezy_udp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.864: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.867: INFO: Unable to read wheezy_udp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.869: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.871: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.874: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.941: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.943: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.945: INFO: Unable to read jessie_udp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.947: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.949: INFO: Unable to read jessie_udp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.951: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.953: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:26.955: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:27.031: INFO: Lookups using dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5080 wheezy_tcp@dns-test-service.dns-5080 wheezy_udp@dns-test-service.dns-5080.svc wheezy_tcp@dns-test-service.dns-5080.svc wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5080 jessie_tcp@dns-test-service.dns-5080 jessie_udp@dns-test-service.dns-5080.svc jessie_tcp@dns-test-service.dns-5080.svc jessie_udp@_http._tcp.dns-test-service.dns-5080.svc jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc]

Jul  7 16:17:31.989: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.045: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.048: INFO: Unable to read wheezy_udp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.051: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.054: INFO: Unable to read wheezy_udp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.057: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.060: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.063: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.084: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.088: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.181: INFO: Unable to read jessie_udp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.184: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.187: INFO: Unable to read jessie_udp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.190: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.193: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.196: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:32.230: INFO: Lookups using dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5080 wheezy_tcp@dns-test-service.dns-5080 wheezy_udp@dns-test-service.dns-5080.svc wheezy_tcp@dns-test-service.dns-5080.svc wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5080 jessie_tcp@dns-test-service.dns-5080 jessie_udp@dns-test-service.dns-5080.svc jessie_tcp@dns-test-service.dns-5080.svc jessie_udp@_http._tcp.dns-test-service.dns-5080.svc jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc]

Jul  7 16:17:37.255: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.347: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.349: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.351: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.499: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.501: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.503: INFO: Unable to read jessie_udp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080 from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.508: INFO: Unable to read jessie_udp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.511: INFO: Unable to read jessie_tcp@dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.514: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.515: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc from pod dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b: the server could not find the requested resource (get pods dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b)
Jul  7 16:17:37.541: INFO: Lookups using dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.dns-5080.svc wheezy_udp@_http._tcp.dns-test-service.dns-5080.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5080.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5080 jessie_tcp@dns-test-service.dns-5080 jessie_udp@dns-test-service.dns-5080.svc jessie_tcp@dns-test-service.dns-5080.svc jessie_udp@_http._tcp.dns-test-service.dns-5080.svc jessie_tcp@_http._tcp.dns-test-service.dns-5080.svc]

Jul  7 16:17:43.054: INFO: DNS probes using dns-5080/dns-test-904c1cfb-57b4-48bd-a9d6-0d662137353b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:17:47.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5080" for this suite.

• [SLOW TEST:50.564 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":137,"skipped":2379,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:17:47.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:17:48.364: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-a74841c6-3dff-495f-acf6-904aeb71c236" in namespace "security-context-test-2940" to be "success or failure"
Jul  7 16:17:48.959: INFO: Pod "busybox-readonly-false-a74841c6-3dff-495f-acf6-904aeb71c236": Phase="Pending", Reason="", readiness=false. Elapsed: 594.831204ms
Jul  7 16:17:50.977: INFO: Pod "busybox-readonly-false-a74841c6-3dff-495f-acf6-904aeb71c236": Phase="Pending", Reason="", readiness=false. Elapsed: 2.612614054s
Jul  7 16:17:53.030: INFO: Pod "busybox-readonly-false-a74841c6-3dff-495f-acf6-904aeb71c236": Phase="Pending", Reason="", readiness=false. Elapsed: 4.666140686s
Jul  7 16:17:55.187: INFO: Pod "busybox-readonly-false-a74841c6-3dff-495f-acf6-904aeb71c236": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.822645982s
Jul  7 16:17:55.187: INFO: Pod "busybox-readonly-false-a74841c6-3dff-495f-acf6-904aeb71c236" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:17:55.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2940" for this suite.

• [SLOW TEST:8.747 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2427,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:17:56.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:17:58.530: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:18:00.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-144" for this suite.

• [SLOW TEST:5.454 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":139,"skipped":2453,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:18:01.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  7 16:18:23.130: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 16:18:23.142: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 16:18:25.142: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 16:18:25.867: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 16:18:27.142: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 16:18:27.146: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 16:18:29.142: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 16:18:29.441: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 16:18:31.142: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 16:18:31.146: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 16:18:33.142: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 16:18:33.146: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 16:18:35.142: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 16:18:35.296: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 16:18:37.142: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 16:18:37.164: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:18:37.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-347" for this suite.

• [SLOW TEST:35.606 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2468,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:18:37.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-b468f376-dd16-44ee-882f-8ce1ada8243d
STEP: Creating a pod to test consume secrets
Jul  7 16:18:37.412: INFO: Waiting up to 5m0s for pod "pod-secrets-196357a8-98dc-44ca-a31c-3575dba21edb" in namespace "secrets-7110" to be "success or failure"
Jul  7 16:18:37.414: INFO: Pod "pod-secrets-196357a8-98dc-44ca-a31c-3575dba21edb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234524ms
Jul  7 16:18:39.686: INFO: Pod "pod-secrets-196357a8-98dc-44ca-a31c-3575dba21edb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273629413s
Jul  7 16:18:41.692: INFO: Pod "pod-secrets-196357a8-98dc-44ca-a31c-3575dba21edb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.279919153s
Jul  7 16:18:43.966: INFO: Pod "pod-secrets-196357a8-98dc-44ca-a31c-3575dba21edb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.55414358s
Jul  7 16:18:45.970: INFO: Pod "pod-secrets-196357a8-98dc-44ca-a31c-3575dba21edb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.558404698s
STEP: Saw pod success
Jul  7 16:18:45.970: INFO: Pod "pod-secrets-196357a8-98dc-44ca-a31c-3575dba21edb" satisfied condition "success or failure"
Jul  7 16:18:45.973: INFO: Trying to get logs from node jerma-worker pod pod-secrets-196357a8-98dc-44ca-a31c-3575dba21edb container secret-volume-test: 
STEP: delete the pod
Jul  7 16:18:46.028: INFO: Waiting for pod pod-secrets-196357a8-98dc-44ca-a31c-3575dba21edb to disappear
Jul  7 16:18:46.086: INFO: Pod pod-secrets-196357a8-98dc-44ca-a31c-3575dba21edb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:18:46.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7110" for this suite.

• [SLOW TEST:8.918 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2468,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:18:46.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:18:46.262: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-01a07112-23e8-4399-8c8a-3542009ec5ff" in namespace "security-context-test-135" to be "success or failure"
Jul  7 16:18:46.290: INFO: Pod "alpine-nnp-false-01a07112-23e8-4399-8c8a-3542009ec5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 28.250025ms
Jul  7 16:18:48.295: INFO: Pod "alpine-nnp-false-01a07112-23e8-4399-8c8a-3542009ec5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03283333s
Jul  7 16:18:50.299: INFO: Pod "alpine-nnp-false-01a07112-23e8-4399-8c8a-3542009ec5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036592921s
Jul  7 16:18:52.303: INFO: Pod "alpine-nnp-false-01a07112-23e8-4399-8c8a-3542009ec5ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040630462s
Jul  7 16:18:52.303: INFO: Pod "alpine-nnp-false-01a07112-23e8-4399-8c8a-3542009ec5ff" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:18:52.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-135" for this suite.

• [SLOW TEST:6.236 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2550,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:18:52.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-6e9e374b-8ea1-45e6-8a1f-da0f5568404b
STEP: Creating a pod to test consume secrets
Jul  7 16:18:52.814: INFO: Waiting up to 5m0s for pod "pod-secrets-a57f046b-af68-4a6d-8f83-e6ffde5f6043" in namespace "secrets-1309" to be "success or failure"
Jul  7 16:18:52.831: INFO: Pod "pod-secrets-a57f046b-af68-4a6d-8f83-e6ffde5f6043": Phase="Pending", Reason="", readiness=false. Elapsed: 16.739148ms
Jul  7 16:18:55.075: INFO: Pod "pod-secrets-a57f046b-af68-4a6d-8f83-e6ffde5f6043": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260531526s
Jul  7 16:18:57.079: INFO: Pod "pod-secrets-a57f046b-af68-4a6d-8f83-e6ffde5f6043": Phase="Running", Reason="", readiness=true. Elapsed: 4.264789226s
Jul  7 16:18:59.083: INFO: Pod "pod-secrets-a57f046b-af68-4a6d-8f83-e6ffde5f6043": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.268843768s
STEP: Saw pod success
Jul  7 16:18:59.083: INFO: Pod "pod-secrets-a57f046b-af68-4a6d-8f83-e6ffde5f6043" satisfied condition "success or failure"
Jul  7 16:18:59.085: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a57f046b-af68-4a6d-8f83-e6ffde5f6043 container secret-volume-test: 
STEP: delete the pod
Jul  7 16:18:59.126: INFO: Waiting for pod pod-secrets-a57f046b-af68-4a6d-8f83-e6ffde5f6043 to disappear
Jul  7 16:18:59.136: INFO: Pod pod-secrets-a57f046b-af68-4a6d-8f83-e6ffde5f6043 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:18:59.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1309" for this suite.

• [SLOW TEST:6.813 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2551,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:18:59.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-fafc6511-a5ec-4b3e-8451-bb3df8b5d059
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:18:59.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1420" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":144,"skipped":2574,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:18:59.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jul  7 16:18:59.387: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jul  7 16:19:10.465: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:19:10.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8600" for this suite.

• [SLOW TEST:11.209 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2605,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:19:10.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-qb6n
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 16:19:11.941: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qb6n" in namespace "subpath-4731" to be "success or failure"
Jul  7 16:19:12.596: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Pending", Reason="", readiness=false. Elapsed: 655.280165ms
Jul  7 16:19:14.599: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.658657161s
Jul  7 16:19:16.991: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Pending", Reason="", readiness=false. Elapsed: 5.050582939s
Jul  7 16:19:19.105: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Pending", Reason="", readiness=false. Elapsed: 7.164328473s
Jul  7 16:19:21.299: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Running", Reason="", readiness=true. Elapsed: 9.358072143s
Jul  7 16:19:23.302: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Running", Reason="", readiness=true. Elapsed: 11.361282399s
Jul  7 16:19:25.306: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Running", Reason="", readiness=true. Elapsed: 13.365277388s
Jul  7 16:19:27.311: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Running", Reason="", readiness=true. Elapsed: 15.370223075s
Jul  7 16:19:29.315: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Running", Reason="", readiness=true. Elapsed: 17.374827514s
Jul  7 16:19:31.425: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Running", Reason="", readiness=true. Elapsed: 19.484547816s
Jul  7 16:19:33.473: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Running", Reason="", readiness=true. Elapsed: 21.532457552s
Jul  7 16:19:35.477: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Running", Reason="", readiness=true. Elapsed: 23.536346608s
Jul  7 16:19:37.482: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Running", Reason="", readiness=true. Elapsed: 25.541091794s
Jul  7 16:19:39.486: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Running", Reason="", readiness=true. Elapsed: 27.545413412s
Jul  7 16:19:41.490: INFO: Pod "pod-subpath-test-configmap-qb6n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.549678996s
STEP: Saw pod success
Jul  7 16:19:41.490: INFO: Pod "pod-subpath-test-configmap-qb6n" satisfied condition "success or failure"
Jul  7 16:19:41.493: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-qb6n container test-container-subpath-configmap-qb6n: 
STEP: delete the pod
Jul  7 16:19:41.600: INFO: Waiting for pod pod-subpath-test-configmap-qb6n to disappear
Jul  7 16:19:41.634: INFO: Pod pod-subpath-test-configmap-qb6n no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qb6n
Jul  7 16:19:41.634: INFO: Deleting pod "pod-subpath-test-configmap-qb6n" in namespace "subpath-4731"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:19:41.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4731" for this suite.

• [SLOW TEST:31.168 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":146,"skipped":2634,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:19:41.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 16:19:42.428: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f5ec63e-63cd-43cc-a87b-ed82e21385a2" in namespace "downward-api-410" to be "success or failure"
Jul  7 16:19:42.554: INFO: Pod "downwardapi-volume-9f5ec63e-63cd-43cc-a87b-ed82e21385a2": Phase="Pending", Reason="", readiness=false. Elapsed: 126.290757ms
Jul  7 16:19:44.740: INFO: Pod "downwardapi-volume-9f5ec63e-63cd-43cc-a87b-ed82e21385a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311992127s
Jul  7 16:19:46.789: INFO: Pod "downwardapi-volume-9f5ec63e-63cd-43cc-a87b-ed82e21385a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.361245422s
STEP: Saw pod success
Jul  7 16:19:46.789: INFO: Pod "downwardapi-volume-9f5ec63e-63cd-43cc-a87b-ed82e21385a2" satisfied condition "success or failure"
Jul  7 16:19:46.925: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9f5ec63e-63cd-43cc-a87b-ed82e21385a2 container client-container: 
STEP: delete the pod
Jul  7 16:19:47.008: INFO: Waiting for pod downwardapi-volume-9f5ec63e-63cd-43cc-a87b-ed82e21385a2 to disappear
Jul  7 16:19:47.022: INFO: Pod downwardapi-volume-9f5ec63e-63cd-43cc-a87b-ed82e21385a2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:19:47.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-410" for this suite.

• [SLOW TEST:5.426 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2641,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:19:47.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:19:47.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul  7 16:19:49.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4567 create -f -'
Jul  7 16:19:59.890: INFO: stderr: ""
Jul  7 16:19:59.890: INFO: stdout: "e2e-test-crd-publish-openapi-5188-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul  7 16:19:59.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4567 delete e2e-test-crd-publish-openapi-5188-crds test-cr'
Jul  7 16:20:00.477: INFO: stderr: ""
Jul  7 16:20:00.477: INFO: stdout: "e2e-test-crd-publish-openapi-5188-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jul  7 16:20:00.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4567 apply -f -'
Jul  7 16:20:00.754: INFO: stderr: ""
Jul  7 16:20:00.754: INFO: stdout: "e2e-test-crd-publish-openapi-5188-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul  7 16:20:00.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4567 delete e2e-test-crd-publish-openapi-5188-crds test-cr'
Jul  7 16:20:00.853: INFO: stderr: ""
Jul  7 16:20:00.853: INFO: stdout: "e2e-test-crd-publish-openapi-5188-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jul  7 16:20:00.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5188-crds'
Jul  7 16:20:01.087: INFO: stderr: ""
Jul  7 16:20:01.087: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5188-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:20:02.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4567" for this suite.

• [SLOW TEST:15.881 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":148,"skipped":2658,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:20:02.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:20:07.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1374" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":149,"skipped":2664,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services are included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:20:07.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if Kubernetes master services are included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Jul  7 16:20:07.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul  7 16:20:07.859: INFO: stderr: ""
Jul  7 16:20:07.860: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32777\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32777/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:20:07.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8630" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":150,"skipped":2664,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:20:07.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-9b8bfbaa-7720-4564-bd79-f6bbcae7ca2d
STEP: Creating a pod to test consume secrets
Jul  7 16:20:08.032: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-74a5631c-8f76-4428-a786-65a76f8703be" in namespace "projected-2586" to be "success or failure"
Jul  7 16:20:08.036: INFO: Pod "pod-projected-secrets-74a5631c-8f76-4428-a786-65a76f8703be": Phase="Pending", Reason="", readiness=false. Elapsed: 3.059705ms
Jul  7 16:20:10.072: INFO: Pod "pod-projected-secrets-74a5631c-8f76-4428-a786-65a76f8703be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039514897s
Jul  7 16:20:12.141: INFO: Pod "pod-projected-secrets-74a5631c-8f76-4428-a786-65a76f8703be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108416962s
Jul  7 16:20:14.317: INFO: Pod "pod-projected-secrets-74a5631c-8f76-4428-a786-65a76f8703be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.284406692s
STEP: Saw pod success
Jul  7 16:20:14.317: INFO: Pod "pod-projected-secrets-74a5631c-8f76-4428-a786-65a76f8703be" satisfied condition "success or failure"
Jul  7 16:20:14.320: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-74a5631c-8f76-4428-a786-65a76f8703be container secret-volume-test: 
STEP: delete the pod
Jul  7 16:20:14.479: INFO: Waiting for pod pod-projected-secrets-74a5631c-8f76-4428-a786-65a76f8703be to disappear
Jul  7 16:20:14.495: INFO: Pod pod-projected-secrets-74a5631c-8f76-4428-a786-65a76f8703be no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:20:14.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2586" for this suite.

• [SLOW TEST:6.639 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2675,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:20:14.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:20:15.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5565" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2680,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:20:16.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jul  7 16:20:17.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jul  7 16:20:28.201: INFO: >>> kubeConfig: /root/.kube/config
Jul  7 16:20:31.312: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:20:40.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2738" for this suite.

• [SLOW TEST:24.554 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":153,"skipped":2701,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:20:40.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  7 16:20:40.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9019'
Jul  7 16:20:41.076: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  7 16:20:41.076: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495
Jul  7 16:20:43.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9019'
Jul  7 16:20:43.704: INFO: stderr: ""
Jul  7 16:20:43.704: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:20:43.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9019" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":154,"skipped":2713,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:20:44.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Jul  7 16:20:44.665: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jul  7 16:20:44.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4804'
Jul  7 16:20:45.354: INFO: stderr: ""
Jul  7 16:20:45.354: INFO: stdout: "service/agnhost-slave created\n"
Jul  7 16:20:45.354: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jul  7 16:20:45.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4804'
Jul  7 16:20:45.702: INFO: stderr: ""
Jul  7 16:20:45.702: INFO: stdout: "service/agnhost-master created\n"
Jul  7 16:20:45.702: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul  7 16:20:45.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4804'
Jul  7 16:20:46.981: INFO: stderr: ""
Jul  7 16:20:46.981: INFO: stdout: "service/frontend created\n"
Jul  7 16:20:46.982: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jul  7 16:20:46.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4804'
Jul  7 16:20:47.515: INFO: stderr: ""
Jul  7 16:20:47.515: INFO: stdout: "deployment.apps/frontend created\n"
Jul  7 16:20:47.516: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul  7 16:20:47.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4804'
Jul  7 16:20:47.851: INFO: stderr: ""
Jul  7 16:20:47.851: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jul  7 16:20:47.851: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul  7 16:20:47.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4804'
Jul  7 16:20:48.138: INFO: stderr: ""
Jul  7 16:20:48.138: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jul  7 16:20:48.138: INFO: Waiting for all frontend pods to be Running.
Jul  7 16:20:58.188: INFO: Waiting for frontend to serve content.
Jul  7 16:20:58.233: INFO: Trying to add a new entry to the guestbook.
Jul  7 16:20:59.243: INFO: Verifying that added entry can be retrieved.
Jul  7 16:20:59.295: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Jul  7 16:21:04.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4804'
Jul  7 16:21:04.460: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 16:21:04.460: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul  7 16:21:04.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4804'
Jul  7 16:21:04.630: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 16:21:04.630: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul  7 16:21:04.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4804'
Jul  7 16:21:04.820: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 16:21:04.820: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul  7 16:21:04.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4804'
Jul  7 16:21:04.952: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 16:21:04.952: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul  7 16:21:04.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4804'
Jul  7 16:21:05.551: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 16:21:05.551: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul  7 16:21:05.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4804'
Jul  7 16:21:05.910: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 16:21:05.910: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:21:05.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4804" for this suite.

• [SLOW TEST:22.170 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":155,"skipped":2716,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:21:06.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  7 16:21:28.036: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  7 16:21:28.051: INFO: Pod pod-with-poststart-http-hook still exists
Jul  7 16:21:30.052: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  7 16:21:30.070: INFO: Pod pod-with-poststart-http-hook still exists
Jul  7 16:21:32.052: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  7 16:21:32.056: INFO: Pod pod-with-poststart-http-hook still exists
Jul  7 16:21:34.052: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  7 16:21:34.056: INFO: Pod pod-with-poststart-http-hook still exists
Jul  7 16:21:36.052: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  7 16:21:36.056: INFO: Pod pod-with-poststart-http-hook still exists
Jul  7 16:21:38.052: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  7 16:21:38.056: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:21:38.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4222" for this suite.

• [SLOW TEST:31.870 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2736,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:21:38.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-qtl2
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 16:21:38.179: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-qtl2" in namespace "subpath-2488" to be "success or failure"
Jul  7 16:21:38.223: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Pending", Reason="", readiness=false. Elapsed: 43.620665ms
Jul  7 16:21:40.501: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32253423s
Jul  7 16:21:42.506: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Running", Reason="", readiness=true. Elapsed: 4.326718677s
Jul  7 16:21:44.510: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Running", Reason="", readiness=true. Elapsed: 6.33107193s
Jul  7 16:21:46.514: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Running", Reason="", readiness=true. Elapsed: 8.335502951s
Jul  7 16:21:48.519: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Running", Reason="", readiness=true. Elapsed: 10.339824497s
Jul  7 16:21:50.652: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Running", Reason="", readiness=true. Elapsed: 12.472761935s
Jul  7 16:21:52.655: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Running", Reason="", readiness=true. Elapsed: 14.476359184s
Jul  7 16:21:54.659: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Running", Reason="", readiness=true. Elapsed: 16.480317216s
Jul  7 16:21:56.664: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Running", Reason="", readiness=true. Elapsed: 18.484704644s
Jul  7 16:21:58.667: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Running", Reason="", readiness=true. Elapsed: 20.488586233s
Jul  7 16:22:00.672: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Running", Reason="", readiness=true. Elapsed: 22.493485587s
Jul  7 16:22:02.676: INFO: Pod "pod-subpath-test-secret-qtl2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.497155188s
STEP: Saw pod success
Jul  7 16:22:02.676: INFO: Pod "pod-subpath-test-secret-qtl2" satisfied condition "success or failure"
Jul  7 16:22:02.679: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-qtl2 container test-container-subpath-secret-qtl2: 
STEP: delete the pod
Jul  7 16:22:02.783: INFO: Waiting for pod pod-subpath-test-secret-qtl2 to disappear
Jul  7 16:22:03.039: INFO: Pod pod-subpath-test-secret-qtl2 no longer exists
STEP: Deleting pod pod-subpath-test-secret-qtl2
Jul  7 16:22:03.039: INFO: Deleting pod "pod-subpath-test-secret-qtl2" in namespace "subpath-2488"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:22:03.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2488" for this suite.

• [SLOW TEST:25.049 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":157,"skipped":2762,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:22:03.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jul  7 16:22:04.157: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:22:10.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4887" for this suite.

• [SLOW TEST:7.663 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":158,"skipped":2814,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:22:10.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Jul  7 16:22:15.197: INFO: Pod pod-hostip-f990f924-d085-400d-b97c-885b6ce207bd has hostIP: 172.17.0.10
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:22:15.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9281" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2858,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:22:15.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:22:15.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1572" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":160,"skipped":2936,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:22:15.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul  7 16:22:15.578: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  7 16:22:15.599: INFO: Waiting for terminating namespaces to be deleted...
Jul  7 16:22:15.602: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Jul  7 16:22:15.607: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container status recorded)
Jul  7 16:22:15.607: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:22:15.607: INFO: pod-hostip-f990f924-d085-400d-b97c-885b6ce207bd from pods-9281 started at 2020-07-07 16:22:11 +0000 UTC (1 container status recorded)
Jul  7 16:22:15.607: INFO: 	Container test ready: true, restart count 0
Jul  7 16:22:15.607: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container status recorded)
Jul  7 16:22:15.607: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 16:22:15.607: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Jul  7 16:22:15.625: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container status recorded)
Jul  7 16:22:15.625: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 16:22:15.625: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container status recorded)
Jul  7 16:22:15.625: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:22:15.625: INFO: pod-init-29b426ea-6d74-44dd-820f-d2e3eb80afee from init-container-4887 started at 2020-07-07 16:22:04 +0000 UTC (1 container status recorded)
Jul  7 16:22:15.625: INFO: 	Container run1 ready: false, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.161f84ea7640094d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:22:16.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1352" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":161,"skipped":2937,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:22:17.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-9b815cab-6cc0-41b6-9eb3-cc400801c221 in namespace container-probe-9041
Jul  7 16:22:21.842: INFO: Started pod busybox-9b815cab-6cc0-41b6-9eb3-cc400801c221 in namespace container-probe-9041
STEP: checking the pod's current state and verifying that restartCount is present
Jul  7 16:22:21.844: INFO: Initial restart count of pod busybox-9b815cab-6cc0-41b6-9eb3-cc400801c221 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:26:22.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9041" for this suite.

• [SLOW TEST:245.714 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2942,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:26:22.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:26:40.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9570" for this suite.

• [SLOW TEST:17.290 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":163,"skipped":2947,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:26:40.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  7 16:26:40.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2171'
Jul  7 16:26:40.941: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  7 16:26:40.941: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Jul  7 16:26:40.986: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jul  7 16:26:41.014: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jul  7 16:26:41.606: INFO: scanned /root for discovery docs: 
Jul  7 16:26:41.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2171'
Jul  7 16:26:59.588: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul  7 16:26:59.588: INFO: stdout: "Created e2e-test-httpd-rc-2d59d707b4892080ee65e19334152858\nScaling up e2e-test-httpd-rc-2d59d707b4892080ee65e19334152858 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-2d59d707b4892080ee65e19334152858 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-2d59d707b4892080ee65e19334152858 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Jul  7 16:26:59.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2171'
Jul  7 16:26:59.779: INFO: stderr: ""
Jul  7 16:26:59.779: INFO: stdout: "e2e-test-httpd-rc-2d59d707b4892080ee65e19334152858-x9cqx "
Jul  7 16:26:59.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-2d59d707b4892080ee65e19334152858-x9cqx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2171'
Jul  7 16:26:59.992: INFO: stderr: ""
Jul  7 16:26:59.992: INFO: stdout: "true"
Jul  7 16:26:59.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-2d59d707b4892080ee65e19334152858-x9cqx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2171'
Jul  7 16:27:00.086: INFO: stderr: ""
Jul  7 16:27:00.086: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Jul  7 16:27:00.086: INFO: e2e-test-httpd-rc-2d59d707b4892080ee65e19334152858-x9cqx is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591
Jul  7 16:27:00.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2171'
Jul  7 16:27:00.222: INFO: stderr: ""
Jul  7 16:27:00.222: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:27:00.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2171" for this suite.

• [SLOW TEST:20.300 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":164,"skipped":2951,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:27:00.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Jul  7 16:27:01.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul  7 16:27:02.965: INFO: stderr: ""
Jul  7 16:27:02.965: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:27:02.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3016" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":165,"skipped":2962,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:27:02.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jul  7 16:27:14.204: INFO: Successfully updated pod "adopt-release-5q2v5"
STEP: Checking that the Job readopts the Pod
Jul  7 16:27:14.204: INFO: Waiting up to 15m0s for pod "adopt-release-5q2v5" in namespace "job-5848" to be "adopted"
Jul  7 16:27:14.267: INFO: Pod "adopt-release-5q2v5": Phase="Running", Reason="", readiness=true. Elapsed: 62.795462ms
Jul  7 16:27:16.272: INFO: Pod "adopt-release-5q2v5": Phase="Running", Reason="", readiness=true. Elapsed: 2.067521268s
Jul  7 16:27:16.272: INFO: Pod "adopt-release-5q2v5" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jul  7 16:27:16.781: INFO: Successfully updated pod "adopt-release-5q2v5"
STEP: Checking that the Job releases the Pod
Jul  7 16:27:16.781: INFO: Waiting up to 15m0s for pod "adopt-release-5q2v5" in namespace "job-5848" to be "released"
Jul  7 16:27:16.848: INFO: Pod "adopt-release-5q2v5": Phase="Running", Reason="", readiness=true. Elapsed: 66.955773ms
Jul  7 16:27:16.848: INFO: Pod "adopt-release-5q2v5" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:27:16.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5848" for this suite.

• [SLOW TEST:14.483 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":166,"skipped":2970,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:27:17.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-4602
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4602 to expose endpoints map[]
Jul  7 16:27:18.365: INFO: Get endpoints failed (87.341047ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul  7 16:27:19.597: INFO: successfully validated that service endpoint-test2 in namespace services-4602 exposes endpoints map[] (1.319062245s elapsed)
STEP: Creating pod pod1 in namespace services-4602
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4602 to expose endpoints map[pod1:[80]]
Jul  7 16:27:25.664: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.80615692s elapsed, will retry)
Jul  7 16:27:28.056: INFO: successfully validated that service endpoint-test2 in namespace services-4602 exposes endpoints map[pod1:[80]] (8.198167619s elapsed)
STEP: Creating pod pod2 in namespace services-4602
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4602 to expose endpoints map[pod1:[80] pod2:[80]]
Jul  7 16:27:33.622: INFO: Unexpected endpoints: found map[f56d80a4-afbf-4dcc-b4ee-0db8916136e2:[80]], expected map[pod1:[80] pod2:[80]] (5.396412751s elapsed, will retry)
Jul  7 16:27:36.218: INFO: successfully validated that service endpoint-test2 in namespace services-4602 exposes endpoints map[pod1:[80] pod2:[80]] (7.992482797s elapsed)
STEP: Deleting pod pod1 in namespace services-4602
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4602 to expose endpoints map[pod2:[80]]
Jul  7 16:27:38.458: INFO: successfully validated that service endpoint-test2 in namespace services-4602 exposes endpoints map[pod2:[80]] (2.235284069s elapsed)
STEP: Deleting pod pod2 in namespace services-4602
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4602 to expose endpoints map[]
Jul  7 16:27:39.412: INFO: successfully validated that service endpoint-test2 in namespace services-4602 exposes endpoints map[] (370.168581ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:27:39.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4602" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.871 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":167,"skipped":3032,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:27:40.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 16:27:42.882: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc92aeb1-b17f-4b32-8bcb-fd48e8bc9257" in namespace "projected-1821" to be "success or failure"
Jul  7 16:27:43.257: INFO: Pod "downwardapi-volume-dc92aeb1-b17f-4b32-8bcb-fd48e8bc9257": Phase="Pending", Reason="", readiness=false. Elapsed: 375.091667ms
Jul  7 16:27:45.502: INFO: Pod "downwardapi-volume-dc92aeb1-b17f-4b32-8bcb-fd48e8bc9257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.620237365s
Jul  7 16:27:47.532: INFO: Pod "downwardapi-volume-dc92aeb1-b17f-4b32-8bcb-fd48e8bc9257": Phase="Pending", Reason="", readiness=false. Elapsed: 4.649521532s
Jul  7 16:27:50.058: INFO: Pod "downwardapi-volume-dc92aeb1-b17f-4b32-8bcb-fd48e8bc9257": Phase="Pending", Reason="", readiness=false. Elapsed: 7.175629746s
Jul  7 16:27:52.526: INFO: Pod "downwardapi-volume-dc92aeb1-b17f-4b32-8bcb-fd48e8bc9257": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.643747829s
STEP: Saw pod success
Jul  7 16:27:52.526: INFO: Pod "downwardapi-volume-dc92aeb1-b17f-4b32-8bcb-fd48e8bc9257" satisfied condition "success or failure"
Jul  7 16:27:52.529: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dc92aeb1-b17f-4b32-8bcb-fd48e8bc9257 container client-container: 
STEP: delete the pod
Jul  7 16:27:53.802: INFO: Waiting for pod downwardapi-volume-dc92aeb1-b17f-4b32-8bcb-fd48e8bc9257 to disappear
Jul  7 16:27:53.856: INFO: Pod downwardapi-volume-dc92aeb1-b17f-4b32-8bcb-fd48e8bc9257 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:27:53.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1821" for this suite.

• [SLOW TEST:13.982 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":3057,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:27:54.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-f79add4f-e459-4c42-8606-46e6bf8b94aa in namespace container-probe-1117
Jul  7 16:28:04.507: INFO: Started pod liveness-f79add4f-e459-4c42-8606-46e6bf8b94aa in namespace container-probe-1117
STEP: checking the pod's current state and verifying that restartCount is present
Jul  7 16:28:04.510: INFO: Initial restart count of pod liveness-f79add4f-e459-4c42-8606-46e6bf8b94aa is 0
Jul  7 16:28:20.621: INFO: Restart count of pod container-probe-1117/liveness-f79add4f-e459-4c42-8606-46e6bf8b94aa is now 1 (16.111076251s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:28:20.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1117" for this suite.

• [SLOW TEST:27.145 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":3093,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:28:21.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-43b7df00-b6bd-4d36-b78a-2787426e8e71 in namespace container-probe-8060
Jul  7 16:28:28.659: INFO: Started pod test-webserver-43b7df00-b6bd-4d36-b78a-2787426e8e71 in namespace container-probe-8060
STEP: checking the pod's current state and verifying that restartCount is present
Jul  7 16:28:28.663: INFO: Initial restart count of pod test-webserver-43b7df00-b6bd-4d36-b78a-2787426e8e71 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:32:30.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8060" for this suite.

• [SLOW TEST:249.023 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":3136,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:32:30.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  7 16:32:30.806: INFO: Waiting up to 5m0s for pod "pod-d0076e42-c8b7-4f1f-bf8b-3e25b676148d" in namespace "emptydir-4270" to be "success or failure"
Jul  7 16:32:30.882: INFO: Pod "pod-d0076e42-c8b7-4f1f-bf8b-3e25b676148d": Phase="Pending", Reason="", readiness=false. Elapsed: 75.665027ms
Jul  7 16:32:33.055: INFO: Pod "pod-d0076e42-c8b7-4f1f-bf8b-3e25b676148d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24937776s
Jul  7 16:32:35.416: INFO: Pod "pod-d0076e42-c8b7-4f1f-bf8b-3e25b676148d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.610244966s
Jul  7 16:32:37.420: INFO: Pod "pod-d0076e42-c8b7-4f1f-bf8b-3e25b676148d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.614348308s
STEP: Saw pod success
Jul  7 16:32:37.420: INFO: Pod "pod-d0076e42-c8b7-4f1f-bf8b-3e25b676148d" satisfied condition "success or failure"
Jul  7 16:32:37.423: INFO: Trying to get logs from node jerma-worker2 pod pod-d0076e42-c8b7-4f1f-bf8b-3e25b676148d container test-container: 
STEP: delete the pod
Jul  7 16:32:37.704: INFO: Waiting for pod pod-d0076e42-c8b7-4f1f-bf8b-3e25b676148d to disappear
Jul  7 16:32:37.726: INFO: Pod pod-d0076e42-c8b7-4f1f-bf8b-3e25b676148d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:32:37.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4270" for this suite.

• [SLOW TEST:7.287 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":3152,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:32:37.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:32:37.885: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:32:38.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3492" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":172,"skipped":3163,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:32:38.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:32:54.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9060" for this suite.

• [SLOW TEST:16.324 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":173,"skipped":3170,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:32:54.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 16:32:55.117: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4a27ede-e8c8-4963-8a84-164fa25c031a" in namespace "downward-api-114" to be "success or failure"
Jul  7 16:32:55.159: INFO: Pod "downwardapi-volume-c4a27ede-e8c8-4963-8a84-164fa25c031a": Phase="Pending", Reason="", readiness=false. Elapsed: 41.248418ms
Jul  7 16:32:57.314: INFO: Pod "downwardapi-volume-c4a27ede-e8c8-4963-8a84-164fa25c031a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196348437s
Jul  7 16:32:59.639: INFO: Pod "downwardapi-volume-c4a27ede-e8c8-4963-8a84-164fa25c031a": Phase="Running", Reason="", readiness=true. Elapsed: 4.521421393s
Jul  7 16:33:01.781: INFO: Pod "downwardapi-volume-c4a27ede-e8c8-4963-8a84-164fa25c031a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.663955588s
STEP: Saw pod success
Jul  7 16:33:01.781: INFO: Pod "downwardapi-volume-c4a27ede-e8c8-4963-8a84-164fa25c031a" satisfied condition "success or failure"
Jul  7 16:33:01.786: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c4a27ede-e8c8-4963-8a84-164fa25c031a container client-container: 
STEP: delete the pod
Jul  7 16:33:02.035: INFO: Waiting for pod downwardapi-volume-c4a27ede-e8c8-4963-8a84-164fa25c031a to disappear
Jul  7 16:33:02.362: INFO: Pod downwardapi-volume-c4a27ede-e8c8-4963-8a84-164fa25c031a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:33:02.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-114" for this suite.

• [SLOW TEST:7.599 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":3177,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:33:02.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Jul  7 16:33:03.795: INFO: created pod pod-service-account-defaultsa
Jul  7 16:33:03.795: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul  7 16:33:03.834: INFO: created pod pod-service-account-mountsa
Jul  7 16:33:03.834: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul  7 16:33:03.859: INFO: created pod pod-service-account-nomountsa
Jul  7 16:33:03.859: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul  7 16:33:03.900: INFO: created pod pod-service-account-defaultsa-mountspec
Jul  7 16:33:03.900: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul  7 16:33:03.912: INFO: created pod pod-service-account-mountsa-mountspec
Jul  7 16:33:03.912: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul  7 16:33:04.008: INFO: created pod pod-service-account-nomountsa-mountspec
Jul  7 16:33:04.008: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul  7 16:33:04.052: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul  7 16:33:04.052: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul  7 16:33:04.135: INFO: created pod pod-service-account-mountsa-nomountspec
Jul  7 16:33:04.135: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul  7 16:33:04.196: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul  7 16:33:04.196: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:33:04.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8015" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":175,"skipped":3183,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:33:04.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Jul  7 16:33:04.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5318 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jul  7 16:33:39.222: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0707 16:33:39.107043    2088 log.go:172] (0xc000a90000) (0xc0006a05a0) Create stream\nI0707 16:33:39.107075    2088 log.go:172] (0xc000a90000) (0xc0006a05a0) Stream added, broadcasting: 1\nI0707 16:33:39.108400    2088 log.go:172] (0xc000a90000) Reply frame received for 1\nI0707 16:33:39.108429    2088 log.go:172] (0xc000a90000) (0xc0006dbae0) Create stream\nI0707 16:33:39.108436    2088 log.go:172] (0xc000a90000) (0xc0006dbae0) Stream added, broadcasting: 3\nI0707 16:33:39.109036    2088 log.go:172] (0xc000a90000) Reply frame received for 3\nI0707 16:33:39.109062    2088 log.go:172] (0xc000a90000) (0xc000758000) Create stream\nI0707 16:33:39.109072    2088 log.go:172] (0xc000a90000) (0xc000758000) Stream added, broadcasting: 5\nI0707 16:33:39.110003    2088 log.go:172] (0xc000a90000) Reply frame received for 5\nI0707 16:33:39.110046    2088 log.go:172] (0xc000a90000) (0xc000018000) Create stream\nI0707 16:33:39.110057    2088 log.go:172] (0xc000a90000) (0xc000018000) Stream added, broadcasting: 7\nI0707 16:33:39.110719    2088 log.go:172] (0xc000a90000) Reply frame received for 7\nI0707 16:33:39.110823    2088 log.go:172] (0xc0006dbae0) (3) Writing data frame\nI0707 16:33:39.110944    2088 log.go:172] (0xc0006dbae0) (3) Writing data frame\nI0707 16:33:39.111507    2088 log.go:172] (0xc000a90000) Data frame received for 5\nI0707 16:33:39.111528    2088 log.go:172] (0xc000758000) (5) Data frame handling\nI0707 16:33:39.111545    2088 log.go:172] (0xc000758000) (5) Data frame sent\nI0707 16:33:39.112069    2088 log.go:172] (0xc000a90000) Data frame received for 5\nI0707 16:33:39.112081    2088 log.go:172] (0xc000758000) (5) Data frame handling\nI0707 16:33:39.112093    2088 log.go:172] (0xc000758000) (5) Data frame sent\nI0707 16:33:39.151210    2088 log.go:172] (0xc000a90000) Data frame received for 5\nI0707 16:33:39.151256    2088 log.go:172] (0xc000758000) (5) Data frame handling\nI0707 16:33:39.151602    2088 log.go:172] (0xc000a90000) Data frame received for 7\nI0707 16:33:39.151647    2088 log.go:172] (0xc000018000) (7) Data frame handling\nI0707 16:33:39.151740    2088 log.go:172] (0xc000a90000) Data frame received for 1\nI0707 16:33:39.151804    2088 log.go:172] (0xc0006a05a0) (1) Data frame handling\nI0707 16:33:39.151815    2088 log.go:172] (0xc0006a05a0) (1) Data frame sent\nI0707 16:33:39.151826    2088 log.go:172] (0xc000a90000) (0xc0006a05a0) Stream removed, broadcasting: 1\nI0707 16:33:39.151991    2088 log.go:172] (0xc000a90000) (0xc0006dbae0) Stream removed, broadcasting: 3\nI0707 16:33:39.152142    2088 log.go:172] (0xc000a90000) (0xc0006a05a0) Stream removed, broadcasting: 1\nI0707 16:33:39.152158    2088 log.go:172] (0xc000a90000) (0xc0006dbae0) Stream removed, broadcasting: 3\nI0707 16:33:39.152167    2088 log.go:172] (0xc000a90000) (0xc000758000) Stream removed, broadcasting: 5\nI0707 16:33:39.152285    2088 log.go:172] (0xc000a90000) (0xc000018000) Stream removed, broadcasting: 7\nI0707 16:33:39.152351    2088 log.go:172] (0xc000a90000) Go away received\n"
Jul  7 16:33:39.222: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:33:41.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5318" for this suite.

• [SLOW TEST:36.941 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":176,"skipped":3197,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:33:41.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:33:57.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8922" for this suite.

• [SLOW TEST:16.679 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":177,"skipped":3218,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:33:57.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:33:59.660: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/: 
alternatives.log
containers/

[the proxied node-log listing above ("alternatives.log", "containers/") is repeated for the remaining 19 proxy requests; the log is truncated here, dropping the end of this test, its PASSED entry, and the opening lines of the next test, [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance], through ">>> kubeConfig: /root/.kube/config"]
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul  7 16:34:13.956: INFO: Successfully updated pod "pod-update-activedeadlineseconds-348afa3d-6477-4d0a-91d3-e27843febb26"
Jul  7 16:34:13.956: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-348afa3d-6477-4d0a-91d3-e27843febb26" in namespace "pods-7689" to be "terminated due to deadline exceeded"
Jul  7 16:34:14.375: INFO: Pod "pod-update-activedeadlineseconds-348afa3d-6477-4d0a-91d3-e27843febb26": Phase="Running", Reason="", readiness=true. Elapsed: 419.341469ms
Jul  7 16:34:16.543: INFO: Pod "pod-update-activedeadlineseconds-348afa3d-6477-4d0a-91d3-e27843febb26": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.5872964s
Jul  7 16:34:16.543: INFO: Pod "pod-update-activedeadlineseconds-348afa3d-6477-4d0a-91d3-e27843febb26" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:34:16.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7689" for this suite.

• [SLOW TEST:15.561 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":3232,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:34:16.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-cc65d9b9-aa2e-4bf3-a55b-bbe4165ffc76
STEP: Creating a pod to test consume secrets
Jul  7 16:34:17.654: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e401e355-22b2-4104-bfad-5c1b73833b2a" in namespace "projected-9042" to be "success or failure"
Jul  7 16:34:17.666: INFO: Pod "pod-projected-secrets-e401e355-22b2-4104-bfad-5c1b73833b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.899612ms
Jul  7 16:34:19.872: INFO: Pod "pod-projected-secrets-e401e355-22b2-4104-bfad-5c1b73833b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218326085s
Jul  7 16:34:21.894: INFO: Pod "pod-projected-secrets-e401e355-22b2-4104-bfad-5c1b73833b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239473919s
Jul  7 16:34:23.927: INFO: Pod "pod-projected-secrets-e401e355-22b2-4104-bfad-5c1b73833b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.272975656s
Jul  7 16:34:25.948: INFO: Pod "pod-projected-secrets-e401e355-22b2-4104-bfad-5c1b73833b2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.29337374s
STEP: Saw pod success
Jul  7 16:34:25.948: INFO: Pod "pod-projected-secrets-e401e355-22b2-4104-bfad-5c1b73833b2a" satisfied condition "success or failure"
Jul  7 16:34:25.951: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e401e355-22b2-4104-bfad-5c1b73833b2a container projected-secret-volume-test: 
STEP: delete the pod
Jul  7 16:34:26.388: INFO: Waiting for pod pod-projected-secrets-e401e355-22b2-4104-bfad-5c1b73833b2a to disappear
Jul  7 16:34:26.391: INFO: Pod pod-projected-secrets-e401e355-22b2-4104-bfad-5c1b73833b2a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:34:26.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9042" for this suite.

• [SLOW TEST:9.845 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3246,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:34:26.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-404e1f80-0ce6-4327-9a64-ce4ed458c9ad
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:34:41.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3248" for this suite.

• [SLOW TEST:14.669 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3251,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:34:41.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi-version CRD
Jul  7 16:34:41.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:34:59.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2740" for this suite.

• [SLOW TEST:18.694 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":182,"skipped":3257,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:34:59.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  7 16:35:05.560: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:35:05.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4959" for this suite.

• [SLOW TEST:5.953 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3280,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:35:05.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 16:35:10.694: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 16:35:12.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736510, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736510, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736510, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736509, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:35:14.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736510, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736510, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736510, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736509, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:35:16.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736510, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736510, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736510, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736509, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 16:35:20.760: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one and expect rejection by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one and expect rejection by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:35:32.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8589" for this suite.
STEP: Destroying namespace "webhook-8589-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:26.618 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":184,"skipped":3283,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:35:32.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jul  7 16:35:38.491: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6450 PodName:pod-sharedvolume-e9e6992d-3700-40b8-b906-06ecaa3c4082 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:35:38.491: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:35:38.517000       6 log.go:172] (0xc004ea4580) (0xc001a6b180) Create stream
I0707 16:35:38.517045       6 log.go:172] (0xc004ea4580) (0xc001a6b180) Stream added, broadcasting: 1
I0707 16:35:38.519738       6 log.go:172] (0xc004ea4580) Reply frame received for 1
I0707 16:35:38.519789       6 log.go:172] (0xc004ea4580) (0xc000a82820) Create stream
I0707 16:35:38.519822       6 log.go:172] (0xc004ea4580) (0xc000a82820) Stream added, broadcasting: 3
I0707 16:35:38.520626       6 log.go:172] (0xc004ea4580) Reply frame received for 3
I0707 16:35:38.520657       6 log.go:172] (0xc004ea4580) (0xc001a6b2c0) Create stream
I0707 16:35:38.520669       6 log.go:172] (0xc004ea4580) (0xc001a6b2c0) Stream added, broadcasting: 5
I0707 16:35:38.521659       6 log.go:172] (0xc004ea4580) Reply frame received for 5
I0707 16:35:38.579853       6 log.go:172] (0xc004ea4580) Data frame received for 3
I0707 16:35:38.579893       6 log.go:172] (0xc000a82820) (3) Data frame handling
I0707 16:35:38.579906       6 log.go:172] (0xc000a82820) (3) Data frame sent
I0707 16:35:38.579918       6 log.go:172] (0xc004ea4580) Data frame received for 3
I0707 16:35:38.579925       6 log.go:172] (0xc000a82820) (3) Data frame handling
I0707 16:35:38.579944       6 log.go:172] (0xc004ea4580) Data frame received for 5
I0707 16:35:38.579952       6 log.go:172] (0xc001a6b2c0) (5) Data frame handling
I0707 16:35:38.581083       6 log.go:172] (0xc004ea4580) Data frame received for 1
I0707 16:35:38.581283       6 log.go:172] (0xc001a6b180) (1) Data frame handling
I0707 16:35:38.581323       6 log.go:172] (0xc001a6b180) (1) Data frame sent
I0707 16:35:38.581340       6 log.go:172] (0xc004ea4580) (0xc001a6b180) Stream removed, broadcasting: 1
I0707 16:35:38.581359       6 log.go:172] (0xc004ea4580) Go away received
I0707 16:35:38.581556       6 log.go:172] (0xc004ea4580) (0xc001a6b180) Stream removed, broadcasting: 1
I0707 16:35:38.581575       6 log.go:172] (0xc004ea4580) (0xc000a82820) Stream removed, broadcasting: 3
I0707 16:35:38.581583       6 log.go:172] (0xc004ea4580) (0xc001a6b2c0) Stream removed, broadcasting: 5
Jul  7 16:35:38.581: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:35:38.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6450" for this suite.

• [SLOW TEST:6.254 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":185,"skipped":3288,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:35:38.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul  7 16:35:38.654: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  7 16:35:38.719: INFO: Waiting for terminating namespaces to be deleted...
Jul  7 16:35:38.721: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Jul  7 16:35:38.735: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  7 16:35:38.735: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 16:35:38.735: INFO: pod-sharedvolume-e9e6992d-3700-40b8-b906-06ecaa3c4082 from emptydir-6450 started at 2020-07-07 16:35:32 +0000 UTC (2 container statuses recorded)
Jul  7 16:35:38.735: INFO: 	Container busybox-main-container ready: true, restart count 0
Jul  7 16:35:38.735: INFO: 	Container busybox-sub-container ready: false, restart count 0
Jul  7 16:35:38.735: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  7 16:35:38.735: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:35:38.735: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Jul  7 16:35:38.740: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  7 16:35:38.740: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:35:38.740: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  7 16:35:38.740: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-baebd7df-1652-4602-9fd4-c2690556802d 90
STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1, expecting it to be scheduled
STEP: Trying to create another pod (pod2) with the same hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides, expecting it to be scheduled
STEP: Trying to create a third pod (pod3) with hostport 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-baebd7df-1652-4602-9fd4-c2690556802d off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-baebd7df-1652-4602-9fd4-c2690556802d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:35:59.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5109" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:21.249 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":186,"skipped":3306,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:35:59.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  7 16:36:00.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-8856'
Jul  7 16:36:00.115: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  7 16:36:00.115: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631
Jul  7 16:36:04.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8856'
Jul  7 16:36:04.664: INFO: stderr: ""
Jul  7 16:36:04.664: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:36:04.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8856" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":187,"skipped":3339,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:36:04.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:36:04.947: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul  7 16:36:05.020: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:05.150: INFO: Number of nodes with available pods: 0
Jul  7 16:36:05.150: INFO: Node jerma-worker is running more than one daemon pod
Jul  7 16:36:06.276: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:06.449: INFO: Number of nodes with available pods: 0
Jul  7 16:36:06.449: INFO: Node jerma-worker is running more than one daemon pod
Jul  7 16:36:07.154: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:07.162: INFO: Number of nodes with available pods: 0
Jul  7 16:36:07.162: INFO: Node jerma-worker is running more than one daemon pod
Jul  7 16:36:08.360: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:08.364: INFO: Number of nodes with available pods: 0
Jul  7 16:36:08.364: INFO: Node jerma-worker is running more than one daemon pod
Jul  7 16:36:09.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:09.158: INFO: Number of nodes with available pods: 0
Jul  7 16:36:09.158: INFO: Node jerma-worker is running more than one daemon pod
Jul  7 16:36:10.178: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:10.271: INFO: Number of nodes with available pods: 0
Jul  7 16:36:10.271: INFO: Node jerma-worker is running more than one daemon pod
Jul  7 16:36:11.218: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:11.295: INFO: Number of nodes with available pods: 2
Jul  7 16:36:11.295: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update the daemon pods' image.
STEP: Check that the daemon pods' images are updated.
Jul  7 16:36:12.611: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:12.611: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:12.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:13.760: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:13.761: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:13.764: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:14.620: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:14.620: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:14.623: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:15.620: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:15.620: INFO: Pod daemon-set-rjhmz is not available
Jul  7 16:36:15.620: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:15.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:16.619: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:16.619: INFO: Pod daemon-set-rjhmz is not available
Jul  7 16:36:16.619: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:16.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:17.619: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:17.619: INFO: Pod daemon-set-rjhmz is not available
Jul  7 16:36:17.619: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:17.622: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:18.619: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:18.619: INFO: Pod daemon-set-rjhmz is not available
Jul  7 16:36:18.619: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:18.623: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:19.619: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:19.619: INFO: Pod daemon-set-rjhmz is not available
Jul  7 16:36:19.619: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:19.623: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:20.620: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:20.620: INFO: Pod daemon-set-rjhmz is not available
Jul  7 16:36:20.620: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:20.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:21.647: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:21.647: INFO: Pod daemon-set-rjhmz is not available
Jul  7 16:36:21.647: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:21.651: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:22.619: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:22.619: INFO: Pod daemon-set-rjhmz is not available
Jul  7 16:36:22.619: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:22.623: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:23.619: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:23.619: INFO: Pod daemon-set-rjhmz is not available
Jul  7 16:36:23.619: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:23.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:24.618: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:24.618: INFO: Pod daemon-set-rjhmz is not available
Jul  7 16:36:24.618: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:24.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:25.695: INFO: Wrong image for pod: daemon-set-rjhmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:25.695: INFO: Pod daemon-set-rjhmz is not available
Jul  7 16:36:25.695: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:25.699: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:26.790: INFO: Pod daemon-set-rdxdm is not available
Jul  7 16:36:26.790: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:26.977: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:27.632: INFO: Pod daemon-set-rdxdm is not available
Jul  7 16:36:27.632: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:27.664: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:28.619: INFO: Pod daemon-set-rdxdm is not available
Jul  7 16:36:28.619: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:28.623: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:29.778: INFO: Pod daemon-set-rdxdm is not available
Jul  7 16:36:29.778: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:29.838: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:30.620: INFO: Pod daemon-set-rdxdm is not available
Jul  7 16:36:30.620: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:30.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:31.620: INFO: Pod daemon-set-rdxdm is not available
Jul  7 16:36:31.620: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:31.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:32.620: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:32.623: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:33.619: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:33.623: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:34.625: INFO: Wrong image for pod: daemon-set-t6sl4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jul  7 16:36:34.625: INFO: Pod daemon-set-t6sl4 is not available
Jul  7 16:36:34.628: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:35.620: INFO: Pod daemon-set-dklgl is not available
Jul  7 16:36:35.623: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul  7 16:36:35.627: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:35.629: INFO: Number of nodes with available pods: 1
Jul  7 16:36:35.629: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:36:36.695: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:36.698: INFO: Number of nodes with available pods: 1
Jul  7 16:36:36.698: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:36:37.634: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:37.636: INFO: Number of nodes with available pods: 1
Jul  7 16:36:37.636: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:36:38.638: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:38.649: INFO: Number of nodes with available pods: 1
Jul  7 16:36:38.649: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:36:39.635: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 16:36:39.638: INFO: Number of nodes with available pods: 2
Jul  7 16:36:39.638: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2195, will wait for the garbage collector to delete the pods
Jul  7 16:36:39.714: INFO: Deleting DaemonSet.extensions daemon-set took: 10.327536ms
Jul  7 16:36:40.014: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.206564ms
Jul  7 16:36:46.980: INFO: Number of nodes with available pods: 0
Jul  7 16:36:46.980: INFO: Number of running nodes: 0, number of available pods: 0
Jul  7 16:36:46.983: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2195/daemonsets","resourceVersion":"945855"},"items":null}

Jul  7 16:36:47.045: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2195/pods","resourceVersion":"945856"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:36:47.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2195" for this suite.

• [SLOW TEST:42.447 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":188,"skipped":3347,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:36:47.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:36:47.451: INFO: Creating deployment "webserver-deployment"
Jul  7 16:36:47.587: INFO: Waiting for observed generation 1
Jul  7 16:36:50.156: INFO: Waiting for all required pods to come up
Jul  7 16:36:50.162: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul  7 16:37:06.170: INFO: Waiting for deployment "webserver-deployment" to complete
Jul  7 16:37:06.176: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jul  7 16:37:06.181: INFO: Updating deployment webserver-deployment
Jul  7 16:37:06.181: INFO: Waiting for observed generation 2
Jul  7 16:37:09.165: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul  7 16:37:10.255: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul  7 16:37:11.233: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul  7 16:37:11.632: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul  7 16:37:11.632: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul  7 16:37:11.635: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul  7 16:37:12.159: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jul  7 16:37:12.159: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jul  7 16:37:12.756: INFO: Updating deployment webserver-deployment
Jul  7 16:37:12.756: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jul  7 16:37:13.432: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul  7 16:37:16.708: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul  7 16:37:17.727: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-8638 /apis/apps/v1/namespaces/deployment-8638/deployments/webserver-deployment 05f69f05-01fb-454f-8f19-3428670fd413 946203 3 2020-07-07 16:36:47 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041e4998  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-07 16:37:13 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-07-07 16:37:14 +0000 UTC,LastTransitionTime:2020-07-07 16:36:47 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jul  7 16:37:18.612: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-8638 /apis/apps/v1/namespaces/deployment-8638/replicasets/webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 946198 3 2020-07-07 16:37:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 05f69f05-01fb-454f-8f19-3428670fd413 0xc0041e4e67 0xc0041e4e68}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041e4ed8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  7 16:37:18.612: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jul  7 16:37:18.612: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-8638 /apis/apps/v1/namespaces/deployment-8638/replicasets/webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 946182 3 2020-07-07 16:36:47 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 05f69f05-01fb-454f-8f19-3428670fd413 0xc0041e4da7 0xc0041e4da8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041e4e08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jul  7 16:37:18.921: INFO: Pod "webserver-deployment-595b5b9587-24fjq" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-24fjq webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-24fjq 04bb2b7f-7aec-486f-a611-93115d97eedb 946012 0 2020-07-07 16:36:47 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0041e5397 0xc0041e5398}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.66,StartTime:2020-07-07 16:36:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 16:37:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a6a619e3e96067eb73d8b36cf8384fd240d9441351660dbe5da31cedbf15b0eb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
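
Each "is available" / "is not available" verdict above reduces, for pods like these with no minReadySeconds in play, to the pod's Ready condition in Status.Conditions. The following is a minimal client-go sketch of the same classification, not the e2e framework's own helper (which also accounts for minReadySeconds); the namespace "deployment-8638", the kubeconfig path, and the label selector mirror values visible in this log, while the context-taking List signature assumes a recent client-go release.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True --
    // the property the verdict lines above reduce to for these pods.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Select the old ReplicaSet's pods by the labels shown in the
        // dumps above (name=httpd, pod-template-hash=595b5b9587).
        pods, err := clientset.CoreV1().Pods("deployment-8638").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "name=httpd,pod-template-hash=595b5b9587"})
        if err != nil {
            panic(err)
        }
        for i := range pods.Items {
            pod := &pods.Items[i]
            verdict := "is not available"
            if isPodReady(pod) {
                verdict = "is available"
            }
            fmt.Printf("Pod %q %s\n", pod.Name, verdict)
        }
    }
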
Jul  7 16:37:18.922: INFO: Pod "webserver-deployment-595b5b9587-5kdgh" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5kdgh webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-5kdgh f03a8b28-5d3d-4426-b746-be80100e4431 946037 0 2020-07-07 16:36:47 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0041e5517 0xc0041e5518}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.67,StartTime:2020-07-07 16:36:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 16:37:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b3b126edc4aaf8e98666d7a57d25c37b3f4f8c0ed68870c2a496e8d8fdd9ef99,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.922: INFO: Pod "webserver-deployment-595b5b9587-88ghs" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-88ghs webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-88ghs ac6747f7-f721-49b9-9259-20889bbbc65d 946235 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0041e5697 0xc0041e5698}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-07 16:37:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
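
For the unavailable pods the dumps state the cause directly: Ready and ContainersReady are False with reason ContainersNotReady, and the httpd container sits in a Waiting state with reason ContainerCreating. A small illustrative helper for pulling those reasons out of a pod's status (the function name is hypothetical, not part of the framework, which simply prints the whole Pod struct):

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // waitingReasons collects the Waiting reason of every unready
    // container, e.g. ["httpd: ContainerCreating"] for the pod above.
    func waitingReasons(pod *corev1.Pod) []string {
        var reasons []string
        for _, cs := range pod.Status.ContainerStatuses {
            if cs.Ready {
                continue
            }
            if w := cs.State.Waiting; w != nil {
                reasons = append(reasons, fmt.Sprintf("%s: %s", cs.Name, w.Reason))
            }
        }
        return reasons
    }
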
Jul  7 16:37:18.922: INFO: Pod "webserver-deployment-595b5b9587-9bffb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9bffb webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-9bffb f1af7682-ec9c-4ec9-a700-0d24071664ce 946176 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0041e57f0 0xc0041e57f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.922: INFO: Pod "webserver-deployment-595b5b9587-b2jc9" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-b2jc9 webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-b2jc9 147aa039-2cf2-4ea4-a180-b041a78821ee 946253 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0041e5900 0xc0041e5901}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-07 16:37:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.922: INFO: Pod "webserver-deployment-595b5b9587-d6sqz" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-d6sqz webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-d6sqz 25fa928b-2a85-4cbe-a8a5-891bc3432b3f 946177 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0041e5a57 0xc0041e5a58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.922: INFO: Pod "webserver-deployment-595b5b9587-dzf48" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dzf48 webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-dzf48 1e2e9b7f-961b-4b77-b1c9-da010981b520 946225 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0041e5b70 0xc0041e5b71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-07 16:37:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.923: INFO: Pod "webserver-deployment-595b5b9587-gbksj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gbksj webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-gbksj 50cea703-3bc9-4e2e-90ee-0c8851c44903 946204 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0041e5cc7 0xc0041e5cc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-07 16:37:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.923: INFO: Pod "webserver-deployment-595b5b9587-ghmh6" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ghmh6 webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-ghmh6 fb45620f-8e4c-4542-882b-f168bad8d18d 945991 0 2020-07-07 16:36:47 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0041e5e20 0xc0041e5e21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.65,StartTime:2020-07-07 16:36:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 16:37:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4ed41da9579941752814aa6198fd69bde0d5a0758c8880cc7e9f443f53f44d97,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.923: INFO: Pod "webserver-deployment-595b5b9587-hb9h9" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hb9h9 webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-hb9h9 46e445af-2b57-4590-8509-6aeab9187d7e 946226 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0041e5f97 0xc0041e5f98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-07 16:37:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.923: INFO: Pod "webserver-deployment-595b5b9587-m44ht" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-m44ht webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-m44ht 549eaaf1-4c95-4c81-a9e7-5f0bd4c333c0 946178 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0030d20f0 0xc0030d20f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.923: INFO: Pod "webserver-deployment-595b5b9587-ml2gz" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ml2gz webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-ml2gz 02566a21-cd2c-4302-9a4b-3d686b58dc7e 946040 0 2020-07-07 16:36:47 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0030d2200 0xc0030d2201}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.68,StartTime:2020-07-07 16:36:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 16:37:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a9bf32fd65e8f1a76024ef890bf1a1941df672aabe2f55ab6efcb934053c88ab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.923: INFO: Pod "webserver-deployment-595b5b9587-ptrdk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ptrdk webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-ptrdk 01fbf0ee-84d1-4f4f-a4d8-e49917edac1e 946181 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0030d2377 0xc0030d2378}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-07 16:37:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.924: INFO: Pod "webserver-deployment-595b5b9587-qvbg6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qvbg6 webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-qvbg6 9a739ebe-2332-4002-b95a-c2204d10c169 946193 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0030d24d0 0xc0030d24d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-07 16:37:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.924: INFO: Pod "webserver-deployment-595b5b9587-rnqw2" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rnqw2 webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-rnqw2 b8f549dd-47c3-431b-a518-35054cc4eace 945995 0 2020-07-07 16:36:47 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0030d2620 0xc0030d2621}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.58,StartTime:2020-07-07 16:36:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 16:37:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ea8e29f41fd2d90097ddf913bbe850efd1cbb529ccada6236b4fe2d12461ff8b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.924: INFO: Pod "webserver-deployment-595b5b9587-rnx7p" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rnx7p webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-rnx7p 7a32bfde-4840-4f95-9e08-29d64c6501b0 945976 0 2020-07-07 16:36:47 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0030d2790 0xc0030d2791}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.64,StartTime:2020-07-07 16:36:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 16:36:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e7926523a9ff76b2fb981587c6d6c12680a723a7b261114ebadc7b9f8c94543d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.924: INFO: Pod "webserver-deployment-595b5b9587-sttm9" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sttm9 webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-sttm9 f9396af5-4a5d-4414-95df-6c4badf33b95 946218 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0030d2917 0xc0030d2918}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-07 16:37:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.924: INFO: Pod "webserver-deployment-595b5b9587-w42qw" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w42qw webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-w42qw 55fb4631-18f7-43bd-8b1d-ffae0b7fd7bb 945979 0 2020-07-07 16:36:47 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0030d2a77 0xc0030d2a78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.57,StartTime:2020-07-07 16:36:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 16:36:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7076469176a8584fbd346c0b57010987418ec2844ad936b4eb9157ba13dc850c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.925: INFO: Pod "webserver-deployment-595b5b9587-wzn4x" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wzn4x webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-wzn4x 289715bd-2e34-487d-9553-f651280ed826 945955 0 2020-07-07 16:36:47 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0030d2bf0 0xc0030d2bf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.56,StartTime:2020-07-07 16:36:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 16:36:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://020ba45eb31b0e2fdc36bbfb23726e757284242fa8d32dadf7835df3b08fdc37,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.925: INFO: Pod "webserver-deployment-595b5b9587-xclwl" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xclwl webserver-deployment-595b5b9587- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-595b5b9587-xclwl a5238a93-0745-4c83-8643-49be472d49a4 946175 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44575cba-a86e-4cf6-9979-ab600101f8e8 0xc0030d2d60 0xc0030d2d61}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
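(The "is available" / "is not available" labels printed for each pod above reflect the pod's Ready condition: a pod counts as available once Ready has been True for at least the deployment's minReadySeconds, so the freshly scheduled Pending pods from the surge are reported as not available. The following is a minimal, self-contained Go sketch of that classification written against the corev1 fields visible in these dumps; it is an illustrative approximation, not the e2e framework's own helper, and the name isPodAvailable is hypothetical.)

// availability.go — sketch of the availability check implied by the log lines above.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable reports whether pod has had Ready=True for at least
// minReadySeconds, mirroring the "is available" wording in the log.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady {
			continue
		}
		if c.Status != corev1.ConditionTrue {
			return false
		}
		if minReadySeconds == 0 {
			return true
		}
		// Ready must have held long enough.
		return c.LastTransitionTime.Time.Add(time.Duration(minReadySeconds) * time.Second).Before(now)
	}
	// No Ready condition recorded yet (e.g. the pod was only just scheduled,
	// like webserver-deployment-595b5b9587-xclwl above).
	return false
}

func main() {
	pending := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodPending,
		Conditions: []corev1.PodCondition{{
			Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady",
		}},
	}}
	running := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodRunning,
		Conditions: []corev1.PodCondition{{
			Type: corev1.PodReady, Status: corev1.ConditionTrue,
			LastTransitionTime: metav1.NewTime(time.Now().Add(-time.Minute)),
		}},
	}}
	fmt.Println(isPodAvailable(pending, 0, time.Now())) // false — "is not available"
	fmt.Println(isPodAvailable(running, 0, time.Now())) // true  — "is available"
}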
Jul  7 16:37:18.925: INFO: Pod "webserver-deployment-c7997dcc8-289xj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-289xj webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-289xj 41a78db1-b784-4851-a9dc-0e4774b3a6ac 946171 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d2e70 0xc0030d2e71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.925: INFO: Pod "webserver-deployment-c7997dcc8-4xqpw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4xqpw webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-4xqpw 428ed4c8-dfc1-43d8-90a2-2c749526b15d 946172 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d2f90 0xc0030d2f91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.925: INFO: Pod "webserver-deployment-c7997dcc8-74nmv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-74nmv webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-74nmv d4817212-9add-4944-9e9f-21dac34c1cda 946078 0 2020-07-07 16:37:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d30b0 0xc0030d30b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-07 16:37:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.926: INFO: Pod "webserver-deployment-c7997dcc8-9jslr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9jslr webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-9jslr bb418afe-0542-45fc-9321-9aa4957f1e38 946139 0 2020-07-07 16:37:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d3230 0xc0030d3231}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.62,StartTime:2020-07-07 16:37:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
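(The dump above is the only one that surfaces the underlying failure for the c7997dcc8 ReplicaSet: its pod template references the deliberately invalid image webserver:404, so the kubelet reports Waiting with Reason=ErrImagePull and the resolve error against docker.io/library/webserver:404; the sibling pods still show ContainerCreating because the pull has not been attempted or retried yet. The short Go sketch below — illustrative only, not e2e framework code — pulls that waiting reason out of a pod's container statuses, using only the corev1 fields shown in the dump.)

// waitingreason.go — extract per-container waiting reasons such as ErrImagePull.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// waitingReasons maps container name -> waiting reason for every container
// stuck in the Waiting state, e.g. "ErrImagePull" for webserver:404 above.
func waitingReasons(pod *corev1.Pod) map[string]string {
	out := map[string]string{}
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil {
			out[cs.Name] = w.Reason
		}
	}
	return out
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		ContainerStatuses: []corev1.ContainerStatus{{
			Name:  "httpd",
			State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{Reason: "ErrImagePull"}},
		}},
	}}
	fmt.Println(waitingReasons(pod)) // map[httpd:ErrImagePull]
}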
Jul  7 16:37:18.926: INFO: Pod "webserver-deployment-c7997dcc8-dp882" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dp882 webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-dp882 1a9c91ad-1b3c-4827-9395-5ea13fb0e640 946113 0 2020-07-07 16:37:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d33e0 0xc0030d33e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-07 16:37:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.926: INFO: Pod "webserver-deployment-c7997dcc8-hmcfj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hmcfj webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-hmcfj 312369a1-12a6-407f-97a1-d2c6172b95b6 946207 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d3550 0xc0030d3551}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-07 16:37:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.926: INFO: Pod "webserver-deployment-c7997dcc8-hxbcr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hxbcr webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-hxbcr 012a424d-daa9-4e56-aead-77f7ee23cf38 946244 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d36c0 0xc0030d36c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-07 16:37:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.926: INFO: Pod "webserver-deployment-c7997dcc8-hxdx7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hxdx7 webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-hxdx7 7d3ea2e9-8624-421e-8317-5cbb7b72cd9e 946243 0 2020-07-07 16:37:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d3830 0xc0030d3831}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.70,StartTime:2020-07-07 16:37:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.926: INFO: Pod "webserver-deployment-c7997dcc8-hxhn8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hxhn8 webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-hxhn8 f0e53476-beaa-4803-b96b-702b2b699513 946195 0 2020-07-07 16:37:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d39d0 0xc0030d39d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.69,StartTime:2020-07-07 16:37:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.927: INFO: Pod "webserver-deployment-c7997dcc8-j68cj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j68cj webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-j68cj 71dfe5f8-cb80-4591-8aec-0fe9cbc7880b 946219 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d3b70 0xc0030d3b71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-07 16:37:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.927: INFO: Pod "webserver-deployment-c7997dcc8-rdswl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rdswl webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-rdswl 6e760e7a-5868-4b74-a401-748612d28199 946236 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d3ce0 0xc0030d3ce1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-07 16:37:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.927: INFO: Pod "webserver-deployment-c7997dcc8-z2sb7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z2sb7 webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-z2sb7 e1dfe054-1f86-4ce3-92df-00581d4285cd 946188 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d3e50 0xc0030d3e51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-07 16:37:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul  7 16:37:18.927: INFO: Pod "webserver-deployment-c7997dcc8-z855k" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z855k webserver-deployment-c7997dcc8- deployment-8638 /api/v1/namespaces/deployment-8638/pods/webserver-deployment-c7997dcc8-z855k 303b2278-60f1-4b0a-896a-6813d4592a2b 946184 0 2020-07-07 16:37:13 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4d30c-ee9c-44a0-97b9-2a5d2454e2e3 0xc0030d3fc0 0xc0030d3fc1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmpnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmpnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmpnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:37:18.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8638" for this suite.

• [SLOW TEST:32.812 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":189,"skipped":3375,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:37:19.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-8bqd
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 16:37:21.737: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8bqd" in namespace "subpath-3565" to be "success or failure"
Jul  7 16:37:21.783: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 45.514599ms
Jul  7 16:37:25.117: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.37966006s
Jul  7 16:37:27.678: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.940922117s
Jul  7 16:37:29.704: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.966936251s
Jul  7 16:37:31.727: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.989425694s
Jul  7 16:37:34.380: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.642920833s
Jul  7 16:37:36.751: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.014052388s
Jul  7 16:37:38.755: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.017354785s
Jul  7 16:37:41.144: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Running", Reason="", readiness=true. Elapsed: 19.406232054s
Jul  7 16:37:43.558: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Running", Reason="", readiness=true. Elapsed: 21.820693247s
Jul  7 16:37:45.983: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Running", Reason="", readiness=true. Elapsed: 24.245865502s
Jul  7 16:37:48.517: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Running", Reason="", readiness=true. Elapsed: 26.779465181s
Jul  7 16:37:51.038: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Running", Reason="", readiness=true. Elapsed: 29.300384557s
Jul  7 16:37:53.050: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Running", Reason="", readiness=true. Elapsed: 31.312948033s
Jul  7 16:37:55.108: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Running", Reason="", readiness=true. Elapsed: 33.370695933s
Jul  7 16:37:57.193: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Running", Reason="", readiness=true. Elapsed: 35.455664642s
Jul  7 16:37:59.326: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Running", Reason="", readiness=true. Elapsed: 37.588337384s
Jul  7 16:38:01.343: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Running", Reason="", readiness=true. Elapsed: 39.605448769s
Jul  7 16:38:03.347: INFO: Pod "pod-subpath-test-projected-8bqd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.609251572s
STEP: Saw pod success
Jul  7 16:38:03.347: INFO: Pod "pod-subpath-test-projected-8bqd" satisfied condition "success or failure"
Jul  7 16:38:03.349: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-8bqd container test-container-subpath-projected-8bqd: 
STEP: delete the pod
Jul  7 16:38:03.602: INFO: Waiting for pod pod-subpath-test-projected-8bqd to disappear
Jul  7 16:38:03.667: INFO: Pod pod-subpath-test-projected-8bqd no longer exists
STEP: Deleting pod pod-subpath-test-projected-8bqd
Jul  7 16:38:03.667: INFO: Deleting pod "pod-subpath-test-projected-8bqd" in namespace "subpath-3565"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:38:03.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3565" for this suite.

• [SLOW TEST:43.743 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":190,"skipped":3375,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:38:03.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Jul  7 16:38:04.079: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix639401703/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:38:04.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1128" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":191,"skipped":3383,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:38:04.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:38:05.308: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-4e9491a9-82e8-45aa-95c7-0e24e5e87232" in namespace "security-context-test-9507" to be "success or failure"
Jul  7 16:38:05.330: INFO: Pod "busybox-privileged-false-4e9491a9-82e8-45aa-95c7-0e24e5e87232": Phase="Pending", Reason="", readiness=false. Elapsed: 21.424427ms
Jul  7 16:38:07.540: INFO: Pod "busybox-privileged-false-4e9491a9-82e8-45aa-95c7-0e24e5e87232": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232249897s
Jul  7 16:38:09.953: INFO: Pod "busybox-privileged-false-4e9491a9-82e8-45aa-95c7-0e24e5e87232": Phase="Pending", Reason="", readiness=false. Elapsed: 4.644622869s
Jul  7 16:38:12.013: INFO: Pod "busybox-privileged-false-4e9491a9-82e8-45aa-95c7-0e24e5e87232": Phase="Pending", Reason="", readiness=false. Elapsed: 6.705218417s
Jul  7 16:38:14.031: INFO: Pod "busybox-privileged-false-4e9491a9-82e8-45aa-95c7-0e24e5e87232": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.723216039s
Jul  7 16:38:14.031: INFO: Pod "busybox-privileged-false-4e9491a9-82e8-45aa-95c7-0e24e5e87232" satisfied condition "success or failure"
Jul  7 16:38:14.037: INFO: Got logs for pod "busybox-privileged-false-4e9491a9-82e8-45aa-95c7-0e24e5e87232": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:38:14.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9507" for this suite.

• [SLOW TEST:9.745 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3385,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:38:14.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:38:14.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9267'
Jul  7 16:38:14.828: INFO: stderr: ""
Jul  7 16:38:14.829: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jul  7 16:38:14.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9267'
Jul  7 16:38:15.580: INFO: stderr: ""
Jul  7 16:38:15.580: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul  7 16:38:16.586: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 16:38:16.586: INFO: Found 0 / 1
Jul  7 16:38:17.606: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 16:38:17.606: INFO: Found 0 / 1
Jul  7 16:38:18.597: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 16:38:18.597: INFO: Found 1 / 1
Jul  7 16:38:18.597: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  7 16:38:18.612: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 16:38:18.612: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  7 16:38:18.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-rqk69 --namespace=kubectl-9267'
Jul  7 16:38:18.731: INFO: stderr: ""
Jul  7 16:38:18.731: INFO: stdout: "Name:         agnhost-master-rqk69\nNamespace:    kubectl-9267\nPriority:     0\nNode:         jerma-worker/172.17.0.10\nStart Time:   Tue, 07 Jul 2020 16:38:14 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.81\nIPs:\n  IP:           10.244.1.81\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://d9d9cc5689bc04d8b91199b7bf786ce0b710df4488e98fb22f972754aa70448d\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 07 Jul 2020 16:38:17 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gscjg (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-gscjg:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-gscjg\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  4s    default-scheduler      Successfully assigned kubectl-9267/agnhost-master-rqk69 to jerma-worker\n  Normal  Pulled     2s    kubelet, jerma-worker  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s    kubelet, jerma-worker  Created container agnhost-master\n  Normal  Started    1s    kubelet, jerma-worker  Started container agnhost-master\n"
Jul  7 16:38:18.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9267'
Jul  7 16:38:18.833: INFO: stderr: ""
Jul  7 16:38:18.833: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-9267\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-master-rqk69\n"
Jul  7 16:38:18.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9267'
Jul  7 16:38:18.932: INFO: stderr: ""
Jul  7 16:38:18.932: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-9267\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.107.154.145\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.81:6379\nSession Affinity:  None\nEvents:            \n"
Jul  7 16:38:18.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Jul  7 16:38:19.059: INFO: stderr: ""
Jul  7 16:38:19.060: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jul 2020 07:50:20 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     \n  RenewTime:       Tue, 07 Jul 2020 16:38:09 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 07 Jul 2020 16:33:28 +0000   Sat, 04 Jul 2020 07:50:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 07 Jul 2020 16:33:28 +0000   Sat, 04 Jul 2020 07:50:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 07 Jul 2020 16:33:28 +0000   Sat, 04 Jul 2020 07:50:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 07 Jul 2020 16:33:28 +0000   Sat, 04 Jul 2020 07:50:54 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.9\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 38019c037cfd4087a82e4827871389a4\n  System UUID:                e9de5062-4fa9-4d0b-8ec1-e753d472da92\n  Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n  Kernel Version:             4.15.0-88-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-pgl6s                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     3d8h\n  kube-system                 coredns-6955765f44-wm87j                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     3d8h\n  kube-system                 etcd-jerma-control-plane              
         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d8h\n  kube-system                 kindnet-8r2ht                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      3d8h\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         3d8h\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         3d8h\n  kube-system                 kube-proxy-c7j2b                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d8h\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         3d8h\n  local-path-storage          local-path-provisioner-58f6947c7-87vc8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d8h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Jul  7 16:38:19.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9267'
Jul  7 16:38:19.200: INFO: stderr: ""
Jul  7 16:38:19.200: INFO: stdout: "Name:         kubectl-9267\nLabels:       e2e-framework=kubectl\n              e2e-run=9fc662a3-b8b7-4d59-9ca2-685b86f49e7e\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:38:19.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9267" for this suite.

• [SLOW TEST:5.164 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":193,"skipped":3406,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:38:19.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:38:19.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jul  7 16:38:19.499: INFO: stderr: ""
Jul  7 16:38:19.499: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.8\", GitCommit:\"35dc4cdc26cfcb6614059c4c6e836e5f0dc61dee\", GitTreeState:\"clean\", BuildDate:\"2020-07-07T14:56:24Z\", GoVersion:\"go1.13.11\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:38:19.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8584" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":194,"skipped":3409,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:38:19.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jul  7 16:38:19.614: INFO: >>> kubeConfig: /root/.kube/config
Jul  7 16:38:22.575: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:38:33.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-49" for this suite.

• [SLOW TEST:13.497 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":195,"skipped":3414,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:38:33.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul  7 16:38:33.161: INFO: Waiting up to 5m0s for pod "downward-api-4615dc2c-2f5c-460e-9f17-07fd1c0843bc" in namespace "downward-api-2767" to be "success or failure"
Jul  7 16:38:33.164: INFO: Pod "downward-api-4615dc2c-2f5c-460e-9f17-07fd1c0843bc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.462519ms
Jul  7 16:38:35.170: INFO: Pod "downward-api-4615dc2c-2f5c-460e-9f17-07fd1c0843bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009011333s
Jul  7 16:38:37.174: INFO: Pod "downward-api-4615dc2c-2f5c-460e-9f17-07fd1c0843bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013568416s
Jul  7 16:38:39.487: INFO: Pod "downward-api-4615dc2c-2f5c-460e-9f17-07fd1c0843bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.326414156s
STEP: Saw pod success
Jul  7 16:38:39.487: INFO: Pod "downward-api-4615dc2c-2f5c-460e-9f17-07fd1c0843bc" satisfied condition "success or failure"
Jul  7 16:38:39.490: INFO: Trying to get logs from node jerma-worker2 pod downward-api-4615dc2c-2f5c-460e-9f17-07fd1c0843bc container dapi-container: 
STEP: delete the pod
Jul  7 16:38:39.712: INFO: Waiting for pod downward-api-4615dc2c-2f5c-460e-9f17-07fd1c0843bc to disappear
Jul  7 16:38:39.858: INFO: Pod downward-api-4615dc2c-2f5c-460e-9f17-07fd1c0843bc no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:38:39.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2767" for this suite.

• [SLOW TEST:6.823 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3425,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
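
The dapi-container above sets no resources.limits, and that is the point of the test: downward-API env vars that reference limits.cpu and limits.memory fall back to the node's allocatable values. A minimal sketch of such a pod spec (modern client-go/api types; the image and names are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"},
                // No resources.limits set: the kubelet substitutes the
                // node's allocatable CPU/memory into these env vars.
                Env: []corev1.EnvVar{
                    {Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
                        ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
                    {Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
                        ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod.Spec.Containers[0].Env)
}
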
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:38:39.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-gsbxw in namespace proxy-8345
I0707 16:38:40.134259       6 runners.go:189] Created replication controller with name: proxy-service-gsbxw, namespace: proxy-8345, replica count: 1
I0707 16:38:41.184905       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 16:38:42.185161       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 16:38:43.185414       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 16:38:44.185653       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 16:38:45.185916       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 16:38:46.186212       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 16:38:47.186520       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 16:38:48.186765       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 16:38:49.187029       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 16:38:50.187284       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 16:38:51.187539       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 16:38:52.187740       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 16:38:53.187954       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 16:38:54.188179       6 runners.go:189] proxy-service-gsbxw Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  7 16:38:54.192: INFO: setup took 14.17634323s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul  7 16:38:54.199: INFO: (0) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 7.071623ms)
Jul  7 16:38:54.199: INFO: (0) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 6.975993ms)
Jul  7 16:38:54.199: INFO: (0) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 6.896968ms)
Jul  7 16:38:54.199: INFO: (0) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 7.185098ms)
Jul  7 16:38:54.200: INFO: (0) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 7.702762ms)
Jul  7 16:38:54.201: INFO: (0) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 9.05056ms)
Jul  7 16:38:54.202: INFO: (0) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 9.672722ms)
Jul  7 16:38:54.202: INFO: (0) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 9.739489ms)
Jul  7 16:38:54.202: INFO: (0) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 9.636612ms)
Jul  7 16:38:54.203: INFO: (0) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 10.163004ms)
Jul  7 16:38:54.203: INFO: (0) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 10.167039ms)
Jul  7 16:38:54.207: INFO: (0) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 14.877231ms)
Jul  7 16:38:54.207: INFO: (0) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 14.790636ms)
Jul  7 16:38:54.208: INFO: (0) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 15.848572ms)
Jul  7 16:38:54.208: INFO: (0) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 16.112067ms)
Jul  7 16:38:54.209: INFO: (0) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: ... (200; 4.382772ms)
Jul  7 16:38:54.214: INFO: (1) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 4.555035ms)
Jul  7 16:38:54.214: INFO: (1) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 4.57873ms)
Jul  7 16:38:54.214: INFO: (1) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 4.553368ms)
Jul  7 16:38:54.214: INFO: (1) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test<... (200; 4.825276ms)
Jul  7 16:38:54.215: INFO: (1) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 5.205115ms)
Jul  7 16:38:54.215: INFO: (1) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 5.3597ms)
Jul  7 16:38:54.215: INFO: (1) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 5.584291ms)
Jul  7 16:38:54.215: INFO: (1) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 5.613716ms)
Jul  7 16:38:54.215: INFO: (1) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 5.627453ms)
Jul  7 16:38:54.216: INFO: (1) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 6.229427ms)
Jul  7 16:38:54.216: INFO: (1) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 6.32301ms)
Jul  7 16:38:54.219: INFO: (2) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 2.856163ms)
Jul  7 16:38:54.219: INFO: (2) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 3.165174ms)
Jul  7 16:38:54.219: INFO: (2) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 3.182365ms)
Jul  7 16:38:54.221: INFO: (2) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: ... (200; 5.583731ms)
Jul  7 16:38:54.221: INFO: (2) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 5.567959ms)
Jul  7 16:38:54.221: INFO: (2) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 5.623084ms)
Jul  7 16:38:54.221: INFO: (2) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 5.686274ms)
Jul  7 16:38:54.221: INFO: (2) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 5.646595ms)
Jul  7 16:38:54.221: INFO: (2) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 5.682023ms)
Jul  7 16:38:54.221: INFO: (2) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 5.693671ms)
Jul  7 16:38:54.221: INFO: (2) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 5.69847ms)
Jul  7 16:38:54.222: INFO: (2) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 6.527977ms)
Jul  7 16:38:54.223: INFO: (2) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 6.850006ms)
Jul  7 16:38:54.223: INFO: (2) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 7.08183ms)
Jul  7 16:38:54.226: INFO: (3) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 2.984703ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 4.860446ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 4.849765ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 4.890631ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 5.00892ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 4.857847ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 5.022303ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 5.032362ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 5.081278ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 5.188038ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 5.21546ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 5.268646ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 5.335086ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 5.368782ms)
Jul  7 16:38:54.228: INFO: (3) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: ... (200; 3.457438ms)
Jul  7 16:38:54.232: INFO: (4) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.024035ms)
Jul  7 16:38:54.233: INFO: (4) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 4.142672ms)
Jul  7 16:38:54.233: INFO: (4) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 3.703093ms)
Jul  7 16:38:54.233: INFO: (4) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test (200; 4.468908ms)
Jul  7 16:38:54.234: INFO: (4) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 5.243456ms)
Jul  7 16:38:54.234: INFO: (4) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 4.859711ms)
Jul  7 16:38:54.234: INFO: (4) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 4.969784ms)
Jul  7 16:38:54.236: INFO: (5) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 1.819793ms)
Jul  7 16:38:54.238: INFO: (5) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 3.917449ms)
Jul  7 16:38:54.239: INFO: (5) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.501192ms)
Jul  7 16:38:54.239: INFO: (5) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 4.515925ms)
Jul  7 16:38:54.239: INFO: (5) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 4.507306ms)
Jul  7 16:38:54.239: INFO: (5) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.57976ms)
Jul  7 16:38:54.239: INFO: (5) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test<... (200; 5.305935ms)
Jul  7 16:38:54.240: INFO: (5) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 5.397164ms)
Jul  7 16:38:54.242: INFO: (5) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 7.687199ms)
Jul  7 16:38:54.242: INFO: (5) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 7.784929ms)
Jul  7 16:38:54.242: INFO: (5) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 7.959865ms)
Jul  7 16:38:54.242: INFO: (5) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 7.955537ms)
Jul  7 16:38:54.242: INFO: (5) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 7.928253ms)
Jul  7 16:38:54.243: INFO: (5) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 8.068566ms)
Jul  7 16:38:54.245: INFO: (6) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 2.48952ms)
Jul  7 16:38:54.247: INFO: (6) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 4.610871ms)
Jul  7 16:38:54.247: INFO: (6) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 4.579591ms)
Jul  7 16:38:54.247: INFO: (6) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 4.614715ms)
Jul  7 16:38:54.247: INFO: (6) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.831889ms)
Jul  7 16:38:54.247: INFO: (6) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 4.906879ms)
Jul  7 16:38:54.248: INFO: (6) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test (200; 4.67385ms)
Jul  7 16:38:54.254: INFO: (7) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 4.673945ms)
Jul  7 16:38:54.254: INFO: (7) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.698537ms)
Jul  7 16:38:54.254: INFO: (7) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.816088ms)
Jul  7 16:38:54.255: INFO: (7) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 5.444608ms)
Jul  7 16:38:54.255: INFO: (7) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 5.691414ms)
Jul  7 16:38:54.255: INFO: (7) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 5.71007ms)
Jul  7 16:38:54.255: INFO: (7) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 5.794469ms)
Jul  7 16:38:54.255: INFO: (7) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 5.755141ms)
Jul  7 16:38:54.255: INFO: (7) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 5.776929ms)
Jul  7 16:38:54.255: INFO: (7) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 5.779675ms)
Jul  7 16:38:54.255: INFO: (7) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 6.085548ms)
Jul  7 16:38:54.261: INFO: (8) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test<... (200; 5.837561ms)
Jul  7 16:38:54.261: INFO: (8) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 5.85436ms)
Jul  7 16:38:54.261: INFO: (8) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 5.954649ms)
Jul  7 16:38:54.262: INFO: (8) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 6.016353ms)
Jul  7 16:38:54.262: INFO: (8) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 6.116841ms)
Jul  7 16:38:54.262: INFO: (8) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 6.044111ms)
Jul  7 16:38:54.262: INFO: (8) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 6.115742ms)
Jul  7 16:38:54.262: INFO: (8) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 6.134247ms)
Jul  7 16:38:54.262: INFO: (8) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 6.122916ms)
Jul  7 16:38:54.262: INFO: (8) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 6.136667ms)
Jul  7 16:38:54.262: INFO: (8) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 6.176062ms)
Jul  7 16:38:54.262: INFO: (8) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 6.465151ms)
Jul  7 16:38:54.262: INFO: (8) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 6.415779ms)
Jul  7 16:38:54.310: INFO: (9) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 48.076933ms)
Jul  7 16:38:54.310: INFO: (9) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 48.217489ms)
Jul  7 16:38:54.310: INFO: (9) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 48.156803ms)
Jul  7 16:38:54.310: INFO: (9) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 48.162408ms)
Jul  7 16:38:54.310: INFO: (9) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 48.200837ms)
Jul  7 16:38:54.310: INFO: (9) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: ... (200; 48.405554ms)
Jul  7 16:38:54.311: INFO: (9) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 48.428567ms)
Jul  7 16:38:54.311: INFO: (9) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 48.858065ms)
Jul  7 16:38:54.311: INFO: (9) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 49.379261ms)
Jul  7 16:38:54.311: INFO: (9) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 49.309137ms)
Jul  7 16:38:54.311: INFO: (9) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 49.341225ms)
Jul  7 16:38:54.311: INFO: (9) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 49.382744ms)
Jul  7 16:38:54.311: INFO: (9) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 49.38349ms)
Jul  7 16:38:54.369: INFO: (10) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 57.760345ms)
Jul  7 16:38:54.369: INFO: (10) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 57.759925ms)
Jul  7 16:38:54.370: INFO: (10) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 57.862675ms)
Jul  7 16:38:54.370: INFO: (10) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test (200; 57.960718ms)
Jul  7 16:38:54.371: INFO: (10) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 58.948378ms)
Jul  7 16:38:54.371: INFO: (10) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 59.067661ms)
Jul  7 16:38:54.371: INFO: (10) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 59.135158ms)
Jul  7 16:38:54.371: INFO: (10) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 59.030572ms)
Jul  7 16:38:54.371: INFO: (10) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 59.130464ms)
Jul  7 16:38:54.371: INFO: (10) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 59.443719ms)
Jul  7 16:38:54.372: INFO: (10) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 59.83891ms)
Jul  7 16:38:54.372: INFO: (10) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 59.93576ms)
Jul  7 16:38:54.372: INFO: (10) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 60.095357ms)
Jul  7 16:38:54.372: INFO: (10) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 60.112076ms)
Jul  7 16:38:54.372: INFO: (10) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 60.354191ms)
Jul  7 16:38:54.376: INFO: (11) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.228355ms)
Jul  7 16:38:54.378: INFO: (11) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 5.52255ms)
Jul  7 16:38:54.378: INFO: (11) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 5.515428ms)
Jul  7 16:38:54.378: INFO: (11) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 5.660688ms)
Jul  7 16:38:54.378: INFO: (11) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 5.655116ms)
Jul  7 16:38:54.378: INFO: (11) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: ... (200; 6.771814ms)
Jul  7 16:38:54.379: INFO: (11) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 6.847911ms)
Jul  7 16:38:54.379: INFO: (11) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 6.863745ms)
Jul  7 16:38:54.379: INFO: (11) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 6.955016ms)
Jul  7 16:38:54.379: INFO: (11) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 6.917952ms)
Jul  7 16:38:54.379: INFO: (11) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 7.020688ms)
Jul  7 16:38:54.379: INFO: (11) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 6.921387ms)
Jul  7 16:38:54.380: INFO: (11) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 7.503218ms)
Jul  7 16:38:54.380: INFO: (11) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 7.451698ms)
Jul  7 16:38:54.380: INFO: (11) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 7.536794ms)
Jul  7 16:38:54.383: INFO: (12) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 3.605568ms)
Jul  7 16:38:54.384: INFO: (12) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.139727ms)
Jul  7 16:38:54.384: INFO: (12) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 4.202082ms)
Jul  7 16:38:54.384: INFO: (12) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 4.592828ms)
Jul  7 16:38:54.384: INFO: (12) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 4.62048ms)
Jul  7 16:38:54.384: INFO: (12) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test<... (200; 4.685938ms)
Jul  7 16:38:54.385: INFO: (12) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 5.153292ms)
Jul  7 16:38:54.385: INFO: (12) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 5.153404ms)
Jul  7 16:38:54.385: INFO: (12) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 5.17008ms)
Jul  7 16:38:54.385: INFO: (12) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 5.225586ms)
Jul  7 16:38:54.385: INFO: (12) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 5.266243ms)
Jul  7 16:38:54.388: INFO: (13) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 3.246523ms)
Jul  7 16:38:54.388: INFO: (13) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 3.347623ms)
Jul  7 16:38:54.389: INFO: (13) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.281514ms)
Jul  7 16:38:54.390: INFO: (13) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: ... (200; 4.436331ms)
Jul  7 16:38:54.390: INFO: (13) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.527091ms)
Jul  7 16:38:54.390: INFO: (13) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 4.437883ms)
Jul  7 16:38:54.390: INFO: (13) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 4.768118ms)
Jul  7 16:38:54.390: INFO: (13) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 5.016014ms)
Jul  7 16:38:54.390: INFO: (13) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 4.905468ms)
Jul  7 16:38:54.390: INFO: (13) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 5.063175ms)
Jul  7 16:38:54.390: INFO: (13) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 4.99427ms)
Jul  7 16:38:54.390: INFO: (13) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 5.016849ms)
Jul  7 16:38:54.390: INFO: (13) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 4.992675ms)
Jul  7 16:38:54.390: INFO: (13) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 5.083684ms)
Jul  7 16:38:54.393: INFO: (14) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 2.532584ms)
Jul  7 16:38:54.395: INFO: (14) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 4.384728ms)
Jul  7 16:38:54.395: INFO: (14) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 4.895172ms)
Jul  7 16:38:54.395: INFO: (14) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 4.806164ms)
Jul  7 16:38:54.395: INFO: (14) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 4.853364ms)
Jul  7 16:38:54.395: INFO: (14) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.883818ms)
Jul  7 16:38:54.395: INFO: (14) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 4.866202ms)
Jul  7 16:38:54.395: INFO: (14) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 4.875195ms)
Jul  7 16:38:54.395: INFO: (14) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test<... (200; 5.475317ms)
Jul  7 16:38:54.396: INFO: (14) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 5.400854ms)
Jul  7 16:38:54.396: INFO: (14) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 5.459032ms)
Jul  7 16:38:54.396: INFO: (14) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 5.529833ms)
Jul  7 16:38:54.399: INFO: (15) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 3.364886ms)
Jul  7 16:38:54.399: INFO: (15) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 3.397662ms)
Jul  7 16:38:54.400: INFO: (15) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test<... (200; 3.69573ms)
Jul  7 16:38:54.400: INFO: (15) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 3.688099ms)
Jul  7 16:38:54.400: INFO: (15) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 3.713125ms)
Jul  7 16:38:54.400: INFO: (15) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 3.662538ms)
Jul  7 16:38:54.400: INFO: (15) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 3.756268ms)
Jul  7 16:38:54.400: INFO: (15) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 3.715143ms)
Jul  7 16:38:54.400: INFO: (15) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 4.301689ms)
Jul  7 16:38:54.401: INFO: (15) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 4.91161ms)
Jul  7 16:38:54.401: INFO: (15) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 4.769385ms)
Jul  7 16:38:54.401: INFO: (15) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 4.899685ms)
Jul  7 16:38:54.401: INFO: (15) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 4.897205ms)
Jul  7 16:38:54.401: INFO: (15) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 4.883362ms)
Jul  7 16:38:54.401: INFO: (15) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 4.893909ms)
Jul  7 16:38:54.404: INFO: (16) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 3.526411ms)
Jul  7 16:38:54.406: INFO: (16) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.993237ms)
Jul  7 16:38:54.406: INFO: (16) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 5.05077ms)
Jul  7 16:38:54.406: INFO: (16) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 5.400418ms)
Jul  7 16:38:54.407: INFO: (16) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 5.893155ms)
Jul  7 16:38:54.407: INFO: (16) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 5.866405ms)
Jul  7 16:38:54.407: INFO: (16) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 5.832907ms)
Jul  7 16:38:54.407: INFO: (16) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 6.101584ms)
Jul  7 16:38:54.407: INFO: (16) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 6.182242ms)
Jul  7 16:38:54.407: INFO: (16) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 6.174781ms)
Jul  7 16:38:54.407: INFO: (16) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test<... (200; 2.365919ms)
Jul  7 16:38:54.410: INFO: (17) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 2.482772ms)
Jul  7 16:38:54.412: INFO: (17) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 3.669612ms)
Jul  7 16:38:54.412: INFO: (17) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 4.224732ms)
Jul  7 16:38:54.412: INFO: (17) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 4.219664ms)
Jul  7 16:38:54.412: INFO: (17) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 4.232661ms)
Jul  7 16:38:54.413: INFO: (17) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test (200; 5.389911ms)
Jul  7 16:38:54.413: INFO: (17) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 5.383273ms)
Jul  7 16:38:54.413: INFO: (17) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 5.387151ms)
Jul  7 16:38:54.414: INFO: (17) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 5.700407ms)
Jul  7 16:38:54.414: INFO: (17) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 5.745394ms)
Jul  7 16:38:54.417: INFO: (18) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 3.455594ms)
Jul  7 16:38:54.417: INFO: (18) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 3.386742ms)
Jul  7 16:38:54.419: INFO: (18) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:1080/proxy/: test<... (200; 4.870654ms)
Jul  7 16:38:54.419: INFO: (18) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 5.045603ms)
Jul  7 16:38:54.419: INFO: (18) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test (200; 4.979812ms)
Jul  7 16:38:54.419: INFO: (18) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 5.137457ms)
Jul  7 16:38:54.419: INFO: (18) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 5.183332ms)
Jul  7 16:38:54.419: INFO: (18) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 5.235718ms)
Jul  7 16:38:54.419: INFO: (18) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 5.198877ms)
Jul  7 16:38:54.419: INFO: (18) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 5.171428ms)
Jul  7 16:38:54.419: INFO: (18) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 5.321409ms)
Jul  7 16:38:54.419: INFO: (18) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 5.326586ms)
Jul  7 16:38:54.420: INFO: (18) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 5.934924ms)
Jul  7 16:38:54.422: INFO: (18) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 8.352078ms)
Jul  7 16:38:54.422: INFO: (18) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 8.397891ms)
Jul  7 16:38:54.426: INFO: (19) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 3.368896ms)
Jul  7 16:38:54.426: INFO: (19) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:1080/proxy/: ... (200; 3.579198ms)
Jul  7 16:38:54.426: INFO: (19) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 3.595935ms)
Jul  7 16:38:54.426: INFO: (19) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:462/proxy/: tls qux (200; 3.638184ms)
Jul  7 16:38:54.426: INFO: (19) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:443/proxy/: test<... (200; 3.706111ms)
Jul  7 16:38:54.426: INFO: (19) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb/proxy/: test (200; 3.709315ms)
Jul  7 16:38:54.426: INFO: (19) /api/v1/namespaces/proxy-8345/pods/http:proxy-service-gsbxw-6v9hb:162/proxy/: bar (200; 3.712632ms)
Jul  7 16:38:54.426: INFO: (19) /api/v1/namespaces/proxy-8345/pods/https:proxy-service-gsbxw-6v9hb:460/proxy/: tls baz (200; 3.749474ms)
Jul  7 16:38:54.426: INFO: (19) /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/: foo (200; 3.722135ms)
Jul  7 16:38:54.427: INFO: (19) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname2/proxy/: bar (200; 4.640972ms)
Jul  7 16:38:54.427: INFO: (19) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname2/proxy/: bar (200; 4.753738ms)
Jul  7 16:38:54.427: INFO: (19) /api/v1/namespaces/proxy-8345/services/proxy-service-gsbxw:portname1/proxy/: foo (200; 4.876563ms)
Jul  7 16:38:54.427: INFO: (19) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname1/proxy/: tls baz (200; 5.017994ms)
Jul  7 16:38:54.427: INFO: (19) /api/v1/namespaces/proxy-8345/services/https:proxy-service-gsbxw:tlsportname2/proxy/: tls qux (200; 4.984674ms)
Jul  7 16:38:54.427: INFO: (19) /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/: foo (200; 5.107716ms)
STEP: deleting ReplicationController proxy-service-gsbxw in namespace proxy-8345, will wait for the garbage collector to delete the pods
Jul  7 16:38:54.487: INFO: Deleting ReplicationController proxy-service-gsbxw took: 7.526674ms
Jul  7 16:38:54.787: INFO: Terminating ReplicationController proxy-service-gsbxw pods took: 300.227239ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:39:06.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8345" for this suite.

• [SLOW TEST:26.453 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":197,"skipped":3428,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
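
Each of the 320 attempts above is a GET through the apiserver's proxy subresource, against either a service port (by name) or a pod port (by number), in plain, http:, and https: variants. A sketch of the two request shapes using client-go's ProxyGet helpers (v0.18+ DoRaw signature; the object names are taken from the log above):

package main

import (
    "context"
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ctx := context.TODO()

    // GET /api/v1/namespaces/proxy-8345/services/http:proxy-service-gsbxw:portname1/proxy/
    body, err := cs.CoreV1().Services("proxy-8345").
        ProxyGet("http", "proxy-service-gsbxw", "portname1", "/", nil).DoRaw(ctx)
    fmt.Printf("service proxy body=%q err=%v\n", body, err)

    // GET /api/v1/namespaces/proxy-8345/pods/proxy-service-gsbxw-6v9hb:160/proxy/
    body, err = cs.CoreV1().Pods("proxy-8345").
        ProxyGet("", "proxy-service-gsbxw-6v9hb", "160", "/", nil).DoRaw(ctx)
    fmt.Printf("pod proxy body=%q err=%v\n", body, err)
}
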
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:39:06.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-7d01e0d3-54e6-44b6-87fd-9b198f69a1df
STEP: Creating a pod to test consume configMaps
Jul  7 16:39:06.466: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-20732d28-09be-4e03-8a14-befeb3380c0d" in namespace "projected-1199" to be "success or failure"
Jul  7 16:39:06.476: INFO: Pod "pod-projected-configmaps-20732d28-09be-4e03-8a14-befeb3380c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.301875ms
Jul  7 16:39:08.479: INFO: Pod "pod-projected-configmaps-20732d28-09be-4e03-8a14-befeb3380c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012725226s
Jul  7 16:39:10.483: INFO: Pod "pod-projected-configmaps-20732d28-09be-4e03-8a14-befeb3380c0d": Phase="Running", Reason="", readiness=true. Elapsed: 4.016935766s
Jul  7 16:39:12.487: INFO: Pod "pod-projected-configmaps-20732d28-09be-4e03-8a14-befeb3380c0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021054734s
STEP: Saw pod success
Jul  7 16:39:12.487: INFO: Pod "pod-projected-configmaps-20732d28-09be-4e03-8a14-befeb3380c0d" satisfied condition "success or failure"
Jul  7 16:39:12.490: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-20732d28-09be-4e03-8a14-befeb3380c0d container projected-configmap-volume-test: 
STEP: delete the pod
Jul  7 16:39:12.533: INFO: Waiting for pod pod-projected-configmaps-20732d28-09be-4e03-8a14-befeb3380c0d to disappear
Jul  7 16:39:12.537: INFO: Pod pod-projected-configmaps-20732d28-09be-4e03-8a14-befeb3380c0d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:39:12.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1199" for this suite.

• [SLOW TEST:6.223 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3450,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
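
"mappings and Item mode set" means the projected configmap volume remaps a key to a new path and pins a per-file mode. A minimal sketch of the volume half of such a pod (modern client-go/api types; the names, paths, and 0400 mode are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400) // the per-item file mode, the "Item mode" in the test name
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-configmap-test-volume-map"},
                                // Key "data-1" is remapped to a different path.
                                Items: []corev1.KeyToPath{{
                                    Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "projected-configmap-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/projected/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
            }},
        },
    }
    fmt.Println(pod.Name)
}
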
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:39:12.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 16:39:13.214: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 16:39:15.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:39:17.913: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:39:20.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736753, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 16:39:22.943: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:39:23.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8429" for this suite.
STEP: Destroying namespace "webhook-8429-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.056 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":199,"skipped":3466,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
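
The listing test above creates several mutating webhook configurations, lists them by label, exercises one against a configmap, and then deletes them as a collection. A sketch of the list-and-delete-collection calls (client-go v0.18+; the label selector is hypothetical):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ctx := context.TODO()
    whs := cs.AdmissionregistrationV1().MutatingWebhookConfigurations()

    // List the webhooks the test created, selected by a (hypothetical) label.
    sel := metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=true"}
    list, err := whs.List(ctx, sel)
    if err != nil {
        panic(err)
    }
    fmt.Println("mutating webhook configurations found:", len(list.Items))

    // Delete them as a collection, mirroring the "Deleting the collection" step.
    if err := whs.DeleteCollection(ctx, metav1.DeleteOptions{}, sel); err != nil {
        panic(err)
    }
}
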
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:39:23.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on the node's default medium
Jul  7 16:39:23.714: INFO: Waiting up to 5m0s for pod "pod-8625e8e3-16b6-48cf-8374-66a42aa5d97f" in namespace "emptydir-2094" to be "success or failure"
Jul  7 16:39:23.794: INFO: Pod "pod-8625e8e3-16b6-48cf-8374-66a42aa5d97f": Phase="Pending", Reason="", readiness=false. Elapsed: 79.71842ms
Jul  7 16:39:25.811: INFO: Pod "pod-8625e8e3-16b6-48cf-8374-66a42aa5d97f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096585686s
Jul  7 16:39:27.841: INFO: Pod "pod-8625e8e3-16b6-48cf-8374-66a42aa5d97f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127249192s
Jul  7 16:39:29.845: INFO: Pod "pod-8625e8e3-16b6-48cf-8374-66a42aa5d97f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130989546s
STEP: Saw pod success
Jul  7 16:39:29.845: INFO: Pod "pod-8625e8e3-16b6-48cf-8374-66a42aa5d97f" satisfied condition "success or failure"
Jul  7 16:39:29.848: INFO: Trying to get logs from node jerma-worker pod pod-8625e8e3-16b6-48cf-8374-66a42aa5d97f container test-container: 
STEP: delete the pod
Jul  7 16:39:29.869: INFO: Waiting for pod pod-8625e8e3-16b6-48cf-8374-66a42aa5d97f to disappear
Jul  7 16:39:29.873: INFO: Pod pod-8625e8e3-16b6-48cf-8374-66a42aa5d97f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:39:29.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2094" for this suite.

• [SLOW TEST:6.283 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3482,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
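
An emptyDir with no medium set is backed by node storage, and the test asserts that the mount point carries the default world-writable mode (0777). A minimal sketch of such a pod (modern client-go/api types; the image and ls-based check are illustrative, the framework uses its own test image):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-default-medium"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // An empty EmptyDirVolumeSource means "default medium":
                // node disk rather than tmpfs.
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // Print the mount point's mode; the check expects the
                // default 0777 (drwxrwxrwx).
                Command:      []string{"sh", "-c", "ls -ld /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    fmt.Println(pod.Name)
}
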
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:39:29.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:40:29.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4275" for this suite.

• [SLOW TEST:60.092 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3508,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
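
The minute-long gap between the [It] and [AfterEach] lines above is the observation window itself: the pod must stay unready the whole time, and, because only liveness failures trigger restarts, its restartCount must stay 0. A minimal sketch of a pod with an always-failing readiness probe (client-go v0.23+, where the embedded probe field is named ProbeHandler; earlier releases call it Handler):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "probe-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "sleep 600"},
                // An always-failing readiness probe: the container keeps
                // running, the pod is never added to endpoints, and it is
                // never restarted, since readiness is not liveness.
                ReadinessProbe: &corev1.Probe{
                    ProbeHandler: corev1.ProbeHandler{
                        Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
                    },
                    InitialDelaySeconds: 5,
                    PeriodSeconds:       5,
                    FailureThreshold:    3,
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
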
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:40:29.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  7 16:40:30.054: INFO: Waiting up to 5m0s for pod "pod-913140e7-413f-46bd-9b83-e96734ec7f8f" in namespace "emptydir-3689" to be "success or failure"
Jul  7 16:40:30.060: INFO: Pod "pod-913140e7-413f-46bd-9b83-e96734ec7f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.996749ms
Jul  7 16:40:32.957: INFO: Pod "pod-913140e7-413f-46bd-9b83-e96734ec7f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.903183676s
Jul  7 16:40:35.022: INFO: Pod "pod-913140e7-413f-46bd-9b83-e96734ec7f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.967699654s
Jul  7 16:40:37.026: INFO: Pod "pod-913140e7-413f-46bd-9b83-e96734ec7f8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.97144263s
STEP: Saw pod success
Jul  7 16:40:37.026: INFO: Pod "pod-913140e7-413f-46bd-9b83-e96734ec7f8f" satisfied condition "success or failure"
Jul  7 16:40:37.028: INFO: Trying to get logs from node jerma-worker2 pod pod-913140e7-413f-46bd-9b83-e96734ec7f8f container test-container: 
STEP: delete the pod
Jul  7 16:40:37.093: INFO: Waiting for pod pod-913140e7-413f-46bd-9b83-e96734ec7f8f to disappear
Jul  7 16:40:37.144: INFO: Pod pod-913140e7-413f-46bd-9b83-e96734ec7f8f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:40:37.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3689" for this suite.

• [SLOW TEST:7.177 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3533,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:40:37.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-98a477f6-80ac-478a-a5dc-57b202992715
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-98a477f6-80ac-478a-a5dc-57b202992715
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:41:50.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-899" for this suite.

• [SLOW TEST:73.220 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3534,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:41:50.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul  7 16:42:04.550: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8187 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:42:04.550: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:42:04.583997       6 log.go:172] (0xc0023984d0) (0xc001a6bc20) Create stream
I0707 16:42:04.584035       6 log.go:172] (0xc0023984d0) (0xc001a6bc20) Stream added, broadcasting: 1
I0707 16:42:04.586286       6 log.go:172] (0xc0023984d0) Reply frame received for 1
I0707 16:42:04.586336       6 log.go:172] (0xc0023984d0) (0xc0026dd2c0) Create stream
I0707 16:42:04.586352       6 log.go:172] (0xc0023984d0) (0xc0026dd2c0) Stream added, broadcasting: 3
I0707 16:42:04.587422       6 log.go:172] (0xc0023984d0) Reply frame received for 3
I0707 16:42:04.587465       6 log.go:172] (0xc0023984d0) (0xc00160fae0) Create stream
I0707 16:42:04.587483       6 log.go:172] (0xc0023984d0) (0xc00160fae0) Stream added, broadcasting: 5
I0707 16:42:04.588617       6 log.go:172] (0xc0023984d0) Reply frame received for 5
I0707 16:42:04.658579       6 log.go:172] (0xc0023984d0) Data frame received for 3
I0707 16:42:04.658613       6 log.go:172] (0xc0026dd2c0) (3) Data frame handling
I0707 16:42:04.658626       6 log.go:172] (0xc0026dd2c0) (3) Data frame sent
I0707 16:42:04.658636       6 log.go:172] (0xc0023984d0) Data frame received for 3
I0707 16:42:04.658650       6 log.go:172] (0xc0026dd2c0) (3) Data frame handling
I0707 16:42:04.658676       6 log.go:172] (0xc0023984d0) Data frame received for 5
I0707 16:42:04.658689       6 log.go:172] (0xc00160fae0) (5) Data frame handling
I0707 16:42:04.659952       6 log.go:172] (0xc0023984d0) Data frame received for 1
I0707 16:42:04.659968       6 log.go:172] (0xc001a6bc20) (1) Data frame handling
I0707 16:42:04.659981       6 log.go:172] (0xc001a6bc20) (1) Data frame sent
I0707 16:42:04.659998       6 log.go:172] (0xc0023984d0) (0xc001a6bc20) Stream removed, broadcasting: 1
I0707 16:42:04.660013       6 log.go:172] (0xc0023984d0) Go away received
I0707 16:42:04.660163       6 log.go:172] (0xc0023984d0) (0xc001a6bc20) Stream removed, broadcasting: 1
I0707 16:42:04.660238       6 log.go:172] (0xc0023984d0) (0xc0026dd2c0) Stream removed, broadcasting: 3
I0707 16:42:04.660256       6 log.go:172] (0xc0023984d0) (0xc00160fae0) Stream removed, broadcasting: 5
Jul  7 16:42:04.660: INFO: Exec stderr: ""
Jul  7 16:42:04.660: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8187 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:42:04.660: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:42:04.688937       6 log.go:172] (0xc0028ea4d0) (0xc0027403c0) Create stream
I0707 16:42:04.689015       6 log.go:172] (0xc0028ea4d0) (0xc0027403c0) Stream added, broadcasting: 1
I0707 16:42:04.691591       6 log.go:172] (0xc0028ea4d0) Reply frame received for 1
I0707 16:42:04.691647       6 log.go:172] (0xc0028ea4d0) (0xc00160fb80) Create stream
I0707 16:42:04.691666       6 log.go:172] (0xc0028ea4d0) (0xc00160fb80) Stream added, broadcasting: 3
I0707 16:42:04.692506       6 log.go:172] (0xc0028ea4d0) Reply frame received for 3
I0707 16:42:04.692531       6 log.go:172] (0xc0028ea4d0) (0xc001a6bcc0) Create stream
I0707 16:42:04.692540       6 log.go:172] (0xc0028ea4d0) (0xc001a6bcc0) Stream added, broadcasting: 5
I0707 16:42:04.693541       6 log.go:172] (0xc0028ea4d0) Reply frame received for 5
I0707 16:42:04.772757       6 log.go:172] (0xc0028ea4d0) Data frame received for 5
I0707 16:42:04.772801       6 log.go:172] (0xc0028ea4d0) Data frame received for 3
I0707 16:42:04.772826       6 log.go:172] (0xc00160fb80) (3) Data frame handling
I0707 16:42:04.772851       6 log.go:172] (0xc00160fb80) (3) Data frame sent
I0707 16:42:04.772892       6 log.go:172] (0xc0028ea4d0) Data frame received for 3
I0707 16:42:04.772933       6 log.go:172] (0xc00160fb80) (3) Data frame handling
I0707 16:42:04.772973       6 log.go:172] (0xc001a6bcc0) (5) Data frame handling
I0707 16:42:04.774677       6 log.go:172] (0xc0028ea4d0) Data frame received for 1
I0707 16:42:04.774707       6 log.go:172] (0xc0027403c0) (1) Data frame handling
I0707 16:42:04.774736       6 log.go:172] (0xc0027403c0) (1) Data frame sent
I0707 16:42:04.774755       6 log.go:172] (0xc0028ea4d0) (0xc0027403c0) Stream removed, broadcasting: 1
I0707 16:42:04.774772       6 log.go:172] (0xc0028ea4d0) Go away received
I0707 16:42:04.774986       6 log.go:172] (0xc0028ea4d0) (0xc0027403c0) Stream removed, broadcasting: 1
I0707 16:42:04.775019       6 log.go:172] (0xc0028ea4d0) (0xc00160fb80) Stream removed, broadcasting: 3
I0707 16:42:04.775042       6 log.go:172] (0xc0028ea4d0) (0xc001a6bcc0) Stream removed, broadcasting: 5
Jul  7 16:42:04.775: INFO: Exec stderr: ""
Jul  7 16:42:04.775: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8187 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:42:04.775: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:42:04.806054       6 log.go:172] (0xc002398c60) (0xc001a6bf40) Create stream
I0707 16:42:04.806083       6 log.go:172] (0xc002398c60) (0xc001a6bf40) Stream added, broadcasting: 1
I0707 16:42:04.808044       6 log.go:172] (0xc002398c60) Reply frame received for 1
I0707 16:42:04.808079       6 log.go:172] (0xc002398c60) (0xc00160fc20) Create stream
I0707 16:42:04.808090       6 log.go:172] (0xc002398c60) (0xc00160fc20) Stream added, broadcasting: 3
I0707 16:42:04.808889       6 log.go:172] (0xc002398c60) Reply frame received for 3
I0707 16:42:04.808926       6 log.go:172] (0xc002398c60) (0xc0012225a0) Create stream
I0707 16:42:04.808942       6 log.go:172] (0xc002398c60) (0xc0012225a0) Stream added, broadcasting: 5
I0707 16:42:04.810067       6 log.go:172] (0xc002398c60) Reply frame received for 5
I0707 16:42:04.882204       6 log.go:172] (0xc002398c60) Data frame received for 3
I0707 16:42:04.882243       6 log.go:172] (0xc00160fc20) (3) Data frame handling
I0707 16:42:04.882263       6 log.go:172] (0xc00160fc20) (3) Data frame sent
I0707 16:42:04.882646       6 log.go:172] (0xc002398c60) Data frame received for 3
I0707 16:42:04.882678       6 log.go:172] (0xc00160fc20) (3) Data frame handling
I0707 16:42:04.882981       6 log.go:172] (0xc002398c60) Data frame received for 5
I0707 16:42:04.882996       6 log.go:172] (0xc0012225a0) (5) Data frame handling
I0707 16:42:04.894787       6 log.go:172] (0xc002398c60) Data frame received for 1
I0707 16:42:04.894812       6 log.go:172] (0xc001a6bf40) (1) Data frame handling
I0707 16:42:04.894820       6 log.go:172] (0xc001a6bf40) (1) Data frame sent
I0707 16:42:04.894829       6 log.go:172] (0xc002398c60) (0xc001a6bf40) Stream removed, broadcasting: 1
I0707 16:42:04.894854       6 log.go:172] (0xc002398c60) Go away received
I0707 16:42:04.895090       6 log.go:172] (0xc002398c60) (0xc001a6bf40) Stream removed, broadcasting: 1
I0707 16:42:04.895109       6 log.go:172] (0xc002398c60) (0xc00160fc20) Stream removed, broadcasting: 3
I0707 16:42:04.895119       6 log.go:172] (0xc002398c60) (0xc0012225a0) Stream removed, broadcasting: 5
Jul  7 16:42:04.895: INFO: Exec stderr: ""
Jul  7 16:42:04.895: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8187 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:42:04.895: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:42:04.979429       6 log.go:172] (0xc001dee370) (0xc0021e01e0) Create stream
I0707 16:42:04.979538       6 log.go:172] (0xc001dee370) (0xc0021e01e0) Stream added, broadcasting: 1
I0707 16:42:04.981672       6 log.go:172] (0xc001dee370) Reply frame received for 1
I0707 16:42:04.981704       6 log.go:172] (0xc001dee370) (0xc002740460) Create stream
I0707 16:42:04.981715       6 log.go:172] (0xc001dee370) (0xc002740460) Stream added, broadcasting: 3
I0707 16:42:04.984814       6 log.go:172] (0xc001dee370) Reply frame received for 3
I0707 16:42:04.984854       6 log.go:172] (0xc001dee370) (0xc002740500) Create stream
I0707 16:42:04.984872       6 log.go:172] (0xc001dee370) (0xc002740500) Stream added, broadcasting: 5
I0707 16:42:04.985906       6 log.go:172] (0xc001dee370) Reply frame received for 5
I0707 16:42:05.048367       6 log.go:172] (0xc001dee370) Data frame received for 5
I0707 16:42:05.048396       6 log.go:172] (0xc002740500) (5) Data frame handling
I0707 16:42:05.048411       6 log.go:172] (0xc001dee370) Data frame received for 3
I0707 16:42:05.048417       6 log.go:172] (0xc002740460) (3) Data frame handling
I0707 16:42:05.048428       6 log.go:172] (0xc002740460) (3) Data frame sent
I0707 16:42:05.048438       6 log.go:172] (0xc001dee370) Data frame received for 3
I0707 16:42:05.048446       6 log.go:172] (0xc002740460) (3) Data frame handling
I0707 16:42:05.049438       6 log.go:172] (0xc001dee370) Data frame received for 1
I0707 16:42:05.049465       6 log.go:172] (0xc0021e01e0) (1) Data frame handling
I0707 16:42:05.049478       6 log.go:172] (0xc0021e01e0) (1) Data frame sent
I0707 16:42:05.049487       6 log.go:172] (0xc001dee370) (0xc0021e01e0) Stream removed, broadcasting: 1
I0707 16:42:05.049506       6 log.go:172] (0xc001dee370) Go away received
I0707 16:42:05.049644       6 log.go:172] (0xc001dee370) (0xc0021e01e0) Stream removed, broadcasting: 1
I0707 16:42:05.049661       6 log.go:172] (0xc001dee370) (0xc002740460) Stream removed, broadcasting: 3
I0707 16:42:05.049670       6 log.go:172] (0xc001dee370) (0xc002740500) Stream removed, broadcasting: 5
Jul  7 16:42:05.049: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul  7 16:42:05.049: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8187 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:42:05.049: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:42:05.192166       6 log.go:172] (0xc002399290) (0xc001222a00) Create stream
I0707 16:42:05.192195       6 log.go:172] (0xc002399290) (0xc001222a00) Stream added, broadcasting: 1
I0707 16:42:05.194193       6 log.go:172] (0xc002399290) Reply frame received for 1
I0707 16:42:05.194228       6 log.go:172] (0xc002399290) (0xc0021e0280) Create stream
I0707 16:42:05.194239       6 log.go:172] (0xc002399290) (0xc0021e0280) Stream added, broadcasting: 3
I0707 16:42:05.195105       6 log.go:172] (0xc002399290) Reply frame received for 3
I0707 16:42:05.195144       6 log.go:172] (0xc002399290) (0xc0027405a0) Create stream
I0707 16:42:05.195158       6 log.go:172] (0xc002399290) (0xc0027405a0) Stream added, broadcasting: 5
I0707 16:42:05.195826       6 log.go:172] (0xc002399290) Reply frame received for 5
I0707 16:42:05.258228       6 log.go:172] (0xc002399290) Data frame received for 3
I0707 16:42:05.258270       6 log.go:172] (0xc0021e0280) (3) Data frame handling
I0707 16:42:05.258279       6 log.go:172] (0xc0021e0280) (3) Data frame sent
I0707 16:42:05.258284       6 log.go:172] (0xc002399290) Data frame received for 3
I0707 16:42:05.258288       6 log.go:172] (0xc0021e0280) (3) Data frame handling
I0707 16:42:05.258307       6 log.go:172] (0xc002399290) Data frame received for 5
I0707 16:42:05.258326       6 log.go:172] (0xc0027405a0) (5) Data frame handling
I0707 16:42:05.259357       6 log.go:172] (0xc002399290) Data frame received for 1
I0707 16:42:05.259389       6 log.go:172] (0xc001222a00) (1) Data frame handling
I0707 16:42:05.259423       6 log.go:172] (0xc001222a00) (1) Data frame sent
I0707 16:42:05.259459       6 log.go:172] (0xc002399290) (0xc001222a00) Stream removed, broadcasting: 1
I0707 16:42:05.259486       6 log.go:172] (0xc002399290) Go away received
I0707 16:42:05.259644       6 log.go:172] (0xc002399290) (0xc001222a00) Stream removed, broadcasting: 1
I0707 16:42:05.259681       6 log.go:172] (0xc002399290) (0xc0021e0280) Stream removed, broadcasting: 3
I0707 16:42:05.259703       6 log.go:172] (0xc002399290) (0xc0027405a0) Stream removed, broadcasting: 5
Jul  7 16:42:05.259: INFO: Exec stderr: ""
Jul  7 16:42:05.259: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8187 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:42:05.259: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:42:05.289584       6 log.go:172] (0xc001db7e40) (0xc000eca8c0) Create stream
I0707 16:42:05.289622       6 log.go:172] (0xc001db7e40) (0xc000eca8c0) Stream added, broadcasting: 1
I0707 16:42:05.292166       6 log.go:172] (0xc001db7e40) Reply frame received for 1
I0707 16:42:05.292223       6 log.go:172] (0xc001db7e40) (0xc0021e0320) Create stream
I0707 16:42:05.292244       6 log.go:172] (0xc001db7e40) (0xc0021e0320) Stream added, broadcasting: 3
I0707 16:42:05.293807       6 log.go:172] (0xc001db7e40) Reply frame received for 3
I0707 16:42:05.293841       6 log.go:172] (0xc001db7e40) (0xc001222aa0) Create stream
I0707 16:42:05.293849       6 log.go:172] (0xc001db7e40) (0xc001222aa0) Stream added, broadcasting: 5
I0707 16:42:05.294829       6 log.go:172] (0xc001db7e40) Reply frame received for 5
I0707 16:42:05.357099       6 log.go:172] (0xc001db7e40) Data frame received for 3
I0707 16:42:05.357545       6 log.go:172] (0xc0021e0320) (3) Data frame handling
I0707 16:42:05.357577       6 log.go:172] (0xc0021e0320) (3) Data frame sent
I0707 16:42:05.357722       6 log.go:172] (0xc001db7e40) Data frame received for 3
I0707 16:42:05.357743       6 log.go:172] (0xc0021e0320) (3) Data frame handling
I0707 16:42:05.357773       6 log.go:172] (0xc001db7e40) Data frame received for 5
I0707 16:42:05.357793       6 log.go:172] (0xc001222aa0) (5) Data frame handling
I0707 16:42:05.358632       6 log.go:172] (0xc001db7e40) Data frame received for 1
I0707 16:42:05.358662       6 log.go:172] (0xc000eca8c0) (1) Data frame handling
I0707 16:42:05.358681       6 log.go:172] (0xc000eca8c0) (1) Data frame sent
I0707 16:42:05.358706       6 log.go:172] (0xc001db7e40) (0xc000eca8c0) Stream removed, broadcasting: 1
I0707 16:42:05.358728       6 log.go:172] (0xc001db7e40) Go away received
I0707 16:42:05.358896       6 log.go:172] (0xc001db7e40) (0xc000eca8c0) Stream removed, broadcasting: 1
I0707 16:42:05.358936       6 log.go:172] (0xc001db7e40) (0xc0021e0320) Stream removed, broadcasting: 3
I0707 16:42:05.358964       6 log.go:172] (0xc001db7e40) (0xc001222aa0) Stream removed, broadcasting: 5
Jul  7 16:42:05.358: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jul  7 16:42:05.359: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8187 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:42:05.359: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:42:05.400092       6 log.go:172] (0xc0028eabb0) (0xc0027408c0) Create stream
I0707 16:42:05.400127       6 log.go:172] (0xc0028eabb0) (0xc0027408c0) Stream added, broadcasting: 1
I0707 16:42:05.402074       6 log.go:172] (0xc0028eabb0) Reply frame received for 1
I0707 16:42:05.402114       6 log.go:172] (0xc0028eabb0) (0xc002740a00) Create stream
I0707 16:42:05.402124       6 log.go:172] (0xc0028eabb0) (0xc002740a00) Stream added, broadcasting: 3
I0707 16:42:05.403131       6 log.go:172] (0xc0028eabb0) Reply frame received for 3
I0707 16:42:05.403181       6 log.go:172] (0xc0028eabb0) (0xc002740be0) Create stream
I0707 16:42:05.403197       6 log.go:172] (0xc0028eabb0) (0xc002740be0) Stream added, broadcasting: 5
I0707 16:42:05.404170       6 log.go:172] (0xc0028eabb0) Reply frame received for 5
I0707 16:42:05.465278       6 log.go:172] (0xc0028eabb0) Data frame received for 3
I0707 16:42:05.465357       6 log.go:172] (0xc002740a00) (3) Data frame handling
I0707 16:42:05.465371       6 log.go:172] (0xc002740a00) (3) Data frame sent
I0707 16:42:05.465380       6 log.go:172] (0xc0028eabb0) Data frame received for 3
I0707 16:42:05.465400       6 log.go:172] (0xc002740a00) (3) Data frame handling
I0707 16:42:05.465414       6 log.go:172] (0xc0028eabb0) Data frame received for 5
I0707 16:42:05.465425       6 log.go:172] (0xc002740be0) (5) Data frame handling
I0707 16:42:05.466760       6 log.go:172] (0xc0028eabb0) Data frame received for 1
I0707 16:42:05.466798       6 log.go:172] (0xc0027408c0) (1) Data frame handling
I0707 16:42:05.466818       6 log.go:172] (0xc0027408c0) (1) Data frame sent
I0707 16:42:05.466837       6 log.go:172] (0xc0028eabb0) (0xc0027408c0) Stream removed, broadcasting: 1
I0707 16:42:05.466865       6 log.go:172] (0xc0028eabb0) Go away received
I0707 16:42:05.466925       6 log.go:172] (0xc0028eabb0) (0xc0027408c0) Stream removed, broadcasting: 1
I0707 16:42:05.466950       6 log.go:172] (0xc0028eabb0) (0xc002740a00) Stream removed, broadcasting: 3
I0707 16:42:05.466960       6 log.go:172] (0xc0028eabb0) (0xc002740be0) Stream removed, broadcasting: 5
Jul  7 16:42:05.466: INFO: Exec stderr: ""
Jul  7 16:42:05.467: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8187 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:42:05.467: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:42:05.492658       6 log.go:172] (0xc001deeb00) (0xc0021e0640) Create stream
I0707 16:42:05.492681       6 log.go:172] (0xc001deeb00) (0xc0021e0640) Stream added, broadcasting: 1
I0707 16:42:05.494950       6 log.go:172] (0xc001deeb00) Reply frame received for 1
I0707 16:42:05.495002       6 log.go:172] (0xc001deeb00) (0xc000eca960) Create stream
I0707 16:42:05.495020       6 log.go:172] (0xc001deeb00) (0xc000eca960) Stream added, broadcasting: 3
I0707 16:42:05.495898       6 log.go:172] (0xc001deeb00) Reply frame received for 3
I0707 16:42:05.495915       6 log.go:172] (0xc001deeb00) (0xc0021e06e0) Create stream
I0707 16:42:05.495920       6 log.go:172] (0xc001deeb00) (0xc0021e06e0) Stream added, broadcasting: 5
I0707 16:42:05.496928       6 log.go:172] (0xc001deeb00) Reply frame received for 5
I0707 16:42:05.554743       6 log.go:172] (0xc001deeb00) Data frame received for 5
I0707 16:42:05.554780       6 log.go:172] (0xc0021e06e0) (5) Data frame handling
I0707 16:42:05.554812       6 log.go:172] (0xc001deeb00) Data frame received for 3
I0707 16:42:05.554836       6 log.go:172] (0xc000eca960) (3) Data frame handling
I0707 16:42:05.554864       6 log.go:172] (0xc000eca960) (3) Data frame sent
I0707 16:42:05.554894       6 log.go:172] (0xc001deeb00) Data frame received for 3
I0707 16:42:05.554913       6 log.go:172] (0xc000eca960) (3) Data frame handling
I0707 16:42:05.555682       6 log.go:172] (0xc001deeb00) Data frame received for 1
I0707 16:42:05.555695       6 log.go:172] (0xc0021e0640) (1) Data frame handling
I0707 16:42:05.555701       6 log.go:172] (0xc0021e0640) (1) Data frame sent
I0707 16:42:05.555710       6 log.go:172] (0xc001deeb00) (0xc0021e0640) Stream removed, broadcasting: 1
I0707 16:42:05.555798       6 log.go:172] (0xc001deeb00) (0xc0021e0640) Stream removed, broadcasting: 1
I0707 16:42:05.555820       6 log.go:172] (0xc001deeb00) (0xc000eca960) Stream removed, broadcasting: 3
I0707 16:42:05.555827       6 log.go:172] (0xc001deeb00) (0xc0021e06e0) Stream removed, broadcasting: 5
Jul  7 16:42:05.555: INFO: Exec stderr: ""
Jul  7 16:42:05.555: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8187 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:42:05.555: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:42:05.555923       6 log.go:172] (0xc001deeb00) Go away received
I0707 16:42:05.583452       6 log.go:172] (0xc0023998c0) (0xc001223400) Create stream
I0707 16:42:05.583524       6 log.go:172] (0xc0023998c0) (0xc001223400) Stream added, broadcasting: 1
I0707 16:42:05.590739       6 log.go:172] (0xc0023998c0) Reply frame received for 1
I0707 16:42:05.590800       6 log.go:172] (0xc0023998c0) (0xc0026dd360) Create stream
I0707 16:42:05.590817       6 log.go:172] (0xc0023998c0) (0xc0026dd360) Stream added, broadcasting: 3
I0707 16:42:05.592553       6 log.go:172] (0xc0023998c0) Reply frame received for 3
I0707 16:42:05.592594       6 log.go:172] (0xc0023998c0) (0xc000ecac80) Create stream
I0707 16:42:05.592613       6 log.go:172] (0xc0023998c0) (0xc000ecac80) Stream added, broadcasting: 5
I0707 16:42:05.594094       6 log.go:172] (0xc0023998c0) Reply frame received for 5
I0707 16:42:05.636297       6 log.go:172] (0xc0023998c0) Data frame received for 5
I0707 16:42:05.636331       6 log.go:172] (0xc000ecac80) (5) Data frame handling
I0707 16:42:05.636362       6 log.go:172] (0xc0023998c0) Data frame received for 3
I0707 16:42:05.636374       6 log.go:172] (0xc0026dd360) (3) Data frame handling
I0707 16:42:05.636384       6 log.go:172] (0xc0026dd360) (3) Data frame sent
I0707 16:42:05.636397       6 log.go:172] (0xc0023998c0) Data frame received for 3
I0707 16:42:05.636411       6 log.go:172] (0xc0026dd360) (3) Data frame handling
I0707 16:42:05.637762       6 log.go:172] (0xc0023998c0) Data frame received for 1
I0707 16:42:05.637815       6 log.go:172] (0xc001223400) (1) Data frame handling
I0707 16:42:05.637846       6 log.go:172] (0xc001223400) (1) Data frame sent
I0707 16:42:05.637866       6 log.go:172] (0xc0023998c0) (0xc001223400) Stream removed, broadcasting: 1
I0707 16:42:05.637887       6 log.go:172] (0xc0023998c0) Go away received
I0707 16:42:05.637980       6 log.go:172] (0xc0023998c0) (0xc001223400) Stream removed, broadcasting: 1
I0707 16:42:05.638001       6 log.go:172] (0xc0023998c0) (0xc0026dd360) Stream removed, broadcasting: 3
I0707 16:42:05.638014       6 log.go:172] (0xc0023998c0) (0xc000ecac80) Stream removed, broadcasting: 5
Jul  7 16:42:05.638: INFO: Exec stderr: ""
Jul  7 16:42:05.638: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8187 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:42:05.638: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:42:05.664193       6 log.go:172] (0xc001def290) (0xc0021e0960) Create stream
I0707 16:42:05.664227       6 log.go:172] (0xc001def290) (0xc0021e0960) Stream added, broadcasting: 1
I0707 16:42:05.667517       6 log.go:172] (0xc001def290) Reply frame received for 1
I0707 16:42:05.667581       6 log.go:172] (0xc001def290) (0xc0021e0a00) Create stream
I0707 16:42:05.667606       6 log.go:172] (0xc001def290) (0xc0021e0a00) Stream added, broadcasting: 3
I0707 16:42:05.669840       6 log.go:172] (0xc001def290) Reply frame received for 3
I0707 16:42:05.669874       6 log.go:172] (0xc001def290) (0xc0021e0aa0) Create stream
I0707 16:42:05.669897       6 log.go:172] (0xc001def290) (0xc0021e0aa0) Stream added, broadcasting: 5
I0707 16:42:05.671305       6 log.go:172] (0xc001def290) Reply frame received for 5
I0707 16:42:05.750497       6 log.go:172] (0xc001def290) Data frame received for 5
I0707 16:42:05.750540       6 log.go:172] (0xc0021e0aa0) (5) Data frame handling
I0707 16:42:05.750576       6 log.go:172] (0xc001def290) Data frame received for 3
I0707 16:42:05.750613       6 log.go:172] (0xc0021e0a00) (3) Data frame handling
I0707 16:42:05.750638       6 log.go:172] (0xc0021e0a00) (3) Data frame sent
I0707 16:42:05.750652       6 log.go:172] (0xc001def290) Data frame received for 3
I0707 16:42:05.750667       6 log.go:172] (0xc0021e0a00) (3) Data frame handling
I0707 16:42:05.752286       6 log.go:172] (0xc001def290) Data frame received for 1
I0707 16:42:05.752331       6 log.go:172] (0xc0021e0960) (1) Data frame handling
I0707 16:42:05.752379       6 log.go:172] (0xc0021e0960) (1) Data frame sent
I0707 16:42:05.752405       6 log.go:172] (0xc001def290) (0xc0021e0960) Stream removed, broadcasting: 1
I0707 16:42:05.752432       6 log.go:172] (0xc001def290) Go away received
I0707 16:42:05.752656       6 log.go:172] (0xc001def290) (0xc0021e0960) Stream removed, broadcasting: 1
I0707 16:42:05.752686       6 log.go:172] (0xc001def290) (0xc0021e0a00) Stream removed, broadcasting: 3
I0707 16:42:05.752715       6 log.go:172] (0xc001def290) (0xc0021e0aa0) Stream removed, broadcasting: 5
Jul  7 16:42:05.752: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:42:05.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8187" for this suite.

• [SLOW TEST:15.386 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3551,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:42:05.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:42:07.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2108" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":205,"skipped":3560,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:42:07.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  7 16:42:08.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4233'
Jul  7 16:42:08.798: INFO: stderr: ""
Jul  7 16:42:08.798: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759
Jul  7 16:42:09.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4233'
Jul  7 16:42:17.095: INFO: stderr: ""
Jul  7 16:42:17.095: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:42:17.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4233" for this suite.

• [SLOW TEST:9.588 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":206,"skipped":3572,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:42:17.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jul  7 16:42:18.799: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8764 /api/v1/namespaces/watch-8764/configmaps/e2e-watch-test-watch-closed 57f2f39f-e513-468d-9596-eeedc5c2d3ca 947802 0 2020-07-07 16:42:18 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  7 16:42:18.799: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8764 /api/v1/namespaces/watch-8764/configmaps/e2e-watch-test-watch-closed 57f2f39f-e513-468d-9596-eeedc5c2d3ca 947803 0 2020-07-07 16:42:18 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jul  7 16:42:19.053: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8764 /api/v1/namespaces/watch-8764/configmaps/e2e-watch-test-watch-closed 57f2f39f-e513-468d-9596-eeedc5c2d3ca 947804 0 2020-07-07 16:42:18 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  7 16:42:19.054: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8764 /api/v1/namespaces/watch-8764/configmaps/e2e-watch-test-watch-closed 57f2f39f-e513-468d-9596-eeedc5c2d3ca 947805 0 2020-07-07 16:42:18 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:42:19.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8764" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":207,"skipped":3573,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:42:19.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:42:20.188: INFO: Creating deployment "test-recreate-deployment"
Jul  7 16:42:20.238: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jul  7 16:42:20.845: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jul  7 16:42:23.159: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jul  7 16:42:24.321: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736940, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736940, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736941, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736940, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:42:26.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736940, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736940, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736941, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736940, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:42:28.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736940, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736940, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736941, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729736940, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:42:30.909: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul  7 16:42:30.950: INFO: Updating deployment test-recreate-deployment
Jul  7 16:42:30.950: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul  7 16:42:35.639: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-3392 /apis/apps/v1/namespaces/deployment-3392/deployments/test-recreate-deployment 1f9082e9-7a82-4a0e-8995-a070390cc3a9 947901 2 2020-07-07 16:42:20 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00473c428  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-07 16:42:35 +0000 UTC,LastTransitionTime:2020-07-07 16:42:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-07-07 16:42:35 +0000 UTC,LastTransitionTime:2020-07-07 16:42:20 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jul  7 16:42:35.735: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-3392 /apis/apps/v1/namespaces/deployment-3392/replicasets/test-recreate-deployment-5f94c574ff 94a8fd97-792c-4815-bd6d-b12ca2f9fbbf 947897 1 2020-07-07 16:42:33 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1f9082e9-7a82-4a0e-8995-a070390cc3a9 0xc0045fa107 0xc0045fa108}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0045fa168  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  7 16:42:35.735: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul  7 16:42:35.735: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-3392 /apis/apps/v1/namespaces/deployment-3392/replicasets/test-recreate-deployment-799c574856 02b9ebb8-4f4f-4a43-9677-39582be26101 947884 2 2020-07-07 16:42:20 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1f9082e9-7a82-4a0e-8995-a070390cc3a9 0xc0045fa1d7 0xc0045fa1d8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0045fa248  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  7 16:42:35.738: INFO: Pod "test-recreate-deployment-5f94c574ff-nvkqw" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-nvkqw test-recreate-deployment-5f94c574ff- deployment-3392 /api/v1/namespaces/deployment-3392/pods/test-recreate-deployment-5f94c574ff-nvkqw 8e3d633e-d42f-4692-90d3-f3c107b8f519 947899 0 2020-07-07 16:42:33 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 94a8fd97-792c-4815-bd6d-b12ca2f9fbbf 0xc00458c0f7 0xc00458c0f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k6lp4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k6lp4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k6lp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:42:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:42:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:42:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:42:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-07 16:42:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:42:35.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3392" for this suite.

• [SLOW TEST:16.602 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":208,"skipped":3603,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:42:35.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0707 16:43:09.546246       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  7 16:43:09.546: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:43:09.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3249" for this suite.

• [SLOW TEST:33.854 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":209,"skipped":3617,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:43:09.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  7 16:43:09.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2244'
Jul  7 16:43:09.977: INFO: stderr: ""
Jul  7 16:43:09.977: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jul  7 16:43:20.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2244 -o json'
Jul  7 16:43:26.671: INFO: stderr: ""
Jul  7 16:43:26.671: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-07-07T16:43:09Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-2244\",\n        \"resourceVersion\": \"948093\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2244/pods/e2e-test-httpd-pod\",\n        \"uid\": \"753b93dd-e7a5-4000-ab8b-1595f5988a09\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-s9r6d\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-s9r6d\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-s9r6d\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-07T16:43:09Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-07T16:43:16Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-07T16:43:16Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-07T16:43:09Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://2e248a537819baf7d64486906db763ff958c782ccad7b87f71f0534380bbca4f\",\n                
\"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-07-07T16:43:15Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.17.0.8\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.84\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.84\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-07-07T16:43:09Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jul  7 16:43:26.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2244'
Jul  7 16:43:35.620: INFO: stderr: ""
Jul  7 16:43:35.620: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795
Jul  7 16:43:35.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2244'
Jul  7 16:43:41.689: INFO: stderr: ""
Jul  7 16:43:41.689: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:43:41.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2244" for this suite.

• [SLOW TEST:32.045 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":210,"skipped":3625,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:43:41.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:43:43.630: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"cdfb78ab-69b3-4db2-928f-f8d41332cc71", Controller:(*bool)(0xc00438818a), BlockOwnerDeletion:(*bool)(0xc00438818b)}}
Jul  7 16:43:43.700: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3ddfd0a2-5203-4dde-a01e-a58f9c537d9f", Controller:(*bool)(0xc0043b6756), BlockOwnerDeletion:(*bool)(0xc0043b6757)}}
Jul  7 16:43:43.894: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e238963a-c544-4d55-817f-bb58d1a84b0e", Controller:(*bool)(0xc004388336), BlockOwnerDeletion:(*bool)(0xc004388337)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:43:54.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-601" for this suite.

• [SLOW TEST:12.642 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":211,"skipped":3641,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:43:54.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:44:46.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8927" for this suite.

• [SLOW TEST:52.050 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3651,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:44:46.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Jul  7 16:44:46.757: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:44:46.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3300" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":213,"skipped":3681,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:44:46.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 16:44:47.625: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63a15386-bc73-4e6c-ac79-f373df713f55" in namespace "projected-1679" to be "success or failure"
Jul  7 16:44:47.666: INFO: Pod "downwardapi-volume-63a15386-bc73-4e6c-ac79-f373df713f55": Phase="Pending", Reason="", readiness=false. Elapsed: 40.991266ms
Jul  7 16:44:49.841: INFO: Pod "downwardapi-volume-63a15386-bc73-4e6c-ac79-f373df713f55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216158241s
Jul  7 16:44:51.928: INFO: Pod "downwardapi-volume-63a15386-bc73-4e6c-ac79-f373df713f55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303039756s
Jul  7 16:44:54.098: INFO: Pod "downwardapi-volume-63a15386-bc73-4e6c-ac79-f373df713f55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.473034165s
Jul  7 16:44:56.608: INFO: Pod "downwardapi-volume-63a15386-bc73-4e6c-ac79-f373df713f55": Phase="Running", Reason="", readiness=true. Elapsed: 8.98265217s
Jul  7 16:44:58.822: INFO: Pod "downwardapi-volume-63a15386-bc73-4e6c-ac79-f373df713f55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.197245271s
STEP: Saw pod success
Jul  7 16:44:58.823: INFO: Pod "downwardapi-volume-63a15386-bc73-4e6c-ac79-f373df713f55" satisfied condition "success or failure"
Jul  7 16:44:59.397: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-63a15386-bc73-4e6c-ac79-f373df713f55 container client-container: 
STEP: delete the pod
Jul  7 16:45:00.572: INFO: Waiting for pod downwardapi-volume-63a15386-bc73-4e6c-ac79-f373df713f55 to disappear
Jul  7 16:45:01.301: INFO: Pod downwardapi-volume-63a15386-bc73-4e6c-ac79-f373df713f55 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:45:01.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1679" for this suite.

• [SLOW TEST:15.038 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3691,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:45:01.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6858.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6858.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6858.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6858.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6858.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6858.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  7 16:45:18.846: INFO: DNS probes using dns-6858/dns-test-c2505c5b-e56a-4b2b-8e2c-701f00bdca76 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:45:19.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6858" for this suite.

• [SLOW TEST:18.352 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":215,"skipped":3712,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:45:20.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jul  7 16:45:21.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6713'
Jul  7 16:45:23.163: INFO: stderr: ""
Jul  7 16:45:23.163: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul  7 16:45:24.224: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 16:45:24.224: INFO: Found 0 / 1
Jul  7 16:45:25.167: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 16:45:25.167: INFO: Found 0 / 1
Jul  7 16:45:26.357: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 16:45:26.357: INFO: Found 0 / 1
Jul  7 16:45:27.169: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 16:45:27.169: INFO: Found 0 / 1
Jul  7 16:45:28.167: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 16:45:28.167: INFO: Found 1 / 1
Jul  7 16:45:28.167: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul  7 16:45:28.171: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 16:45:28.171: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  7 16:45:28.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-t6dfg --namespace=kubectl-6713 -p {"metadata":{"annotations":{"x":"y"}}}'
Jul  7 16:45:28.267: INFO: stderr: ""
Jul  7 16:45:28.267: INFO: stdout: "pod/agnhost-master-t6dfg patched\n"
STEP: checking annotations
Jul  7 16:45:28.276: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 16:45:28.276: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:45:28.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6713" for this suite.

• [SLOW TEST:8.055 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":216,"skipped":3823,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:45:28.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul  7 16:45:28.668: INFO: Waiting up to 5m0s for pod "pod-14a22db0-f57f-437b-bdbe-5056ee23efa7" in namespace "emptydir-3929" to be "success or failure"
Jul  7 16:45:28.684: INFO: Pod "pod-14a22db0-f57f-437b-bdbe-5056ee23efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.084121ms
Jul  7 16:45:31.032: INFO: Pod "pod-14a22db0-f57f-437b-bdbe-5056ee23efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363930927s
Jul  7 16:45:33.051: INFO: Pod "pod-14a22db0-f57f-437b-bdbe-5056ee23efa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.382115861s
STEP: Saw pod success
Jul  7 16:45:33.051: INFO: Pod "pod-14a22db0-f57f-437b-bdbe-5056ee23efa7" satisfied condition "success or failure"
Jul  7 16:45:33.053: INFO: Trying to get logs from node jerma-worker pod pod-14a22db0-f57f-437b-bdbe-5056ee23efa7 container test-container: 
STEP: delete the pod
Jul  7 16:45:33.434: INFO: Waiting for pod pod-14a22db0-f57f-437b-bdbe-5056ee23efa7 to disappear
Jul  7 16:45:33.517: INFO: Pod pod-14a22db0-f57f-437b-bdbe-5056ee23efa7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:45:33.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3929" for this suite.

• [SLOW TEST:5.330 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3846,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:45:33.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4712.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4712.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4712.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  7 16:45:41.928: INFO: DNS probes using dns-test-8fc6d46c-a1a5-46ba-a353-a4daac59cb6a succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4712.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4712.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4712.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  7 16:45:55.350: INFO: File wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local from pod dns-4712/dns-test-4f48a804-91eb-4538-b70b-fd5172fe7d74 contains 'foo.example.com.' instead of 'bar.example.com.'
Jul  7 16:45:55.353: INFO: File jessie_udp@dns-test-service-3.dns-4712.svc.cluster.local from pod dns-4712/dns-test-4f48a804-91eb-4538-b70b-fd5172fe7d74 contains 'foo.example.com.' instead of 'bar.example.com.'
Jul  7 16:45:55.353: INFO: Lookups using dns-4712/dns-test-4f48a804-91eb-4538-b70b-fd5172fe7d74 failed for: [wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local jessie_udp@dns-test-service-3.dns-4712.svc.cluster.local]

Jul  7 16:46:00.539: INFO: File wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local from pod dns-4712/dns-test-4f48a804-91eb-4538-b70b-fd5172fe7d74 contains 'foo.example.com.' instead of 'bar.example.com.'
Jul  7 16:46:00.559: INFO: File jessie_udp@dns-test-service-3.dns-4712.svc.cluster.local from pod dns-4712/dns-test-4f48a804-91eb-4538-b70b-fd5172fe7d74 contains 'foo.example.com.' instead of 'bar.example.com.'
Jul  7 16:46:00.559: INFO: Lookups using dns-4712/dns-test-4f48a804-91eb-4538-b70b-fd5172fe7d74 failed for: [wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local jessie_udp@dns-test-service-3.dns-4712.svc.cluster.local]

Jul  7 16:46:05.357: INFO: File wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local from pod dns-4712/dns-test-4f48a804-91eb-4538-b70b-fd5172fe7d74 contains 'foo.example.com.' instead of 'bar.example.com.'
Jul  7 16:46:05.360: INFO: File jessie_udp@dns-test-service-3.dns-4712.svc.cluster.local from pod dns-4712/dns-test-4f48a804-91eb-4538-b70b-fd5172fe7d74 contains 'foo.example.com.' instead of 'bar.example.com.'
Jul  7 16:46:05.360: INFO: Lookups using dns-4712/dns-test-4f48a804-91eb-4538-b70b-fd5172fe7d74 failed for: [wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local jessie_udp@dns-test-service-3.dns-4712.svc.cluster.local]

Jul  7 16:46:10.363: INFO: DNS probes using dns-test-4f48a804-91eb-4538-b70b-fd5172fe7d74 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4712.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4712.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4712.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  7 16:46:29.243: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local from pod dns-4712/dns-test-ba268dc5-f7f1-4561-b949-58a3e20c197d: Get https://172.30.12.66:32777/api/v1/namespaces/dns-4712/pods/dns-test-ba268dc5-f7f1-4561-b949-58a3e20c197d/proxy/results/wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local: stream error: stream ID 899; INTERNAL_ERROR
Jul  7 16:46:29.249: INFO: File jessie_udp@dns-test-service-3.dns-4712.svc.cluster.local from pod  dns-4712/dns-test-ba268dc5-f7f1-4561-b949-58a3e20c197d contains '' instead of '10.101.39.206'
Jul  7 16:46:29.249: INFO: Lookups using dns-4712/dns-test-ba268dc5-f7f1-4561-b949-58a3e20c197d failed for: [wheezy_udp@dns-test-service-3.dns-4712.svc.cluster.local jessie_udp@dns-test-service-3.dns-4712.svc.cluster.local]

Jul  7 16:46:34.337: INFO: DNS probes using dns-test-ba268dc5-f7f1-4561-b949-58a3e20c197d succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:46:34.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4712" for this suite.

• [SLOW TEST:61.485 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":218,"skipped":3849,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:46:35.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:46:35.318: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul  7 16:46:35.432: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul  7 16:46:40.675: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  7 16:46:42.704: INFO: Creating deployment "test-rolling-update-deployment"
Jul  7 16:46:42.735: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jul  7 16:46:42.927: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul  7 16:46:44.969: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Jul  7 16:46:45.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737203, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737203, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737203, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737202, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:46:47.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737203, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737203, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737203, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737202, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:46:49.304: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul  7 16:46:49.490: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-1221 /apis/apps/v1/namespaces/deployment-1221/deployments/test-rolling-update-deployment 118455ae-fa95-419e-8454-79d0b832792d 949083 1 2020-07-07 16:46:42 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00435bcf8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-07 16:46:43 +0000 UTC,LastTransitionTime:2020-07-07 16:46:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-07-07 16:46:48 +0000 UTC,LastTransitionTime:2020-07-07 16:46:42 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul  7 16:46:49.493: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-1221 /apis/apps/v1/namespaces/deployment-1221/replicasets/test-rolling-update-deployment-67cf4f6444 596770c1-9625-4a4a-908f-bcec91fb8461 949071 1 2020-07-07 16:46:42 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 118455ae-fa95-419e-8454-79d0b832792d 0xc0041f0cd7 0xc0041f0cd8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041f0d48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul  7 16:46:49.493: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul  7 16:46:49.493: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-1221 /apis/apps/v1/namespaces/deployment-1221/replicasets/test-rolling-update-controller 5b385df3-92d3-436c-b427-287fe5c78ac4 949081 2 2020-07-07 16:46:35 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 118455ae-fa95-419e-8454-79d0b832792d 0xc0041f0c07 0xc0041f0c08}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0041f0c68  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul  7 16:46:49.496: INFO: Pod "test-rolling-update-deployment-67cf4f6444-kfhvj" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-kfhvj test-rolling-update-deployment-67cf4f6444- deployment-1221 /api/v1/namespaces/deployment-1221/pods/test-rolling-update-deployment-67cf4f6444-kfhvj bde9ec25-ad8f-40c3-90bb-6d0683720d1f 949070 0 2020-07-07 16:46:42 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 596770c1-9625-4a4a-908f-bcec91fb8461 0xc0043b60d7 0xc0043b60d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zwvfn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zwvfn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zwvfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:46:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:46:47 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:46:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-07 16:46:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.93,StartTime:2020-07-07 16:46:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-07 16:46:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://a8ae06480412d1664394477c88499db5f688fbdce4850315208b1e576db1ca38,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:46:49.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1221" for this suite.

• [SLOW TEST:14.404 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":219,"skipped":3855,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:46:49.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-698/secret-test-63d35109-59af-47a9-9ccf-3e03fe74852f
STEP: Creating a pod to test consume secrets
Jul  7 16:46:50.384: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd1e56d5-f309-419e-ba13-7e685e28bdb0" in namespace "secrets-698" to be "success or failure"
Jul  7 16:46:50.429: INFO: Pod "pod-configmaps-fd1e56d5-f309-419e-ba13-7e685e28bdb0": Phase="Pending", Reason="", readiness=false. Elapsed: 44.757069ms
Jul  7 16:46:52.561: INFO: Pod "pod-configmaps-fd1e56d5-f309-419e-ba13-7e685e28bdb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17630707s
Jul  7 16:46:55.016: INFO: Pod "pod-configmaps-fd1e56d5-f309-419e-ba13-7e685e28bdb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.631927329s
Jul  7 16:46:57.195: INFO: Pod "pod-configmaps-fd1e56d5-f309-419e-ba13-7e685e28bdb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.811300014s
STEP: Saw pod success
Jul  7 16:46:57.196: INFO: Pod "pod-configmaps-fd1e56d5-f309-419e-ba13-7e685e28bdb0" satisfied condition "success or failure"
Jul  7 16:46:57.465: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-fd1e56d5-f309-419e-ba13-7e685e28bdb0 container env-test: 
STEP: delete the pod
Jul  7 16:46:58.279: INFO: Waiting for pod pod-configmaps-fd1e56d5-f309-419e-ba13-7e685e28bdb0 to disappear
Jul  7 16:46:59.394: INFO: Pod pod-configmaps-fd1e56d5-f309-419e-ba13-7e685e28bdb0 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:46:59.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-698" for this suite.

• [SLOW TEST:9.933 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3856,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:46:59.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul  7 16:47:00.583: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4862 /api/v1/namespaces/watch-4862/configmaps/e2e-watch-test-resource-version bb8f1b82-a1a0-4e10-8951-66d5a7028098 949156 0 2020-07-07 16:47:00 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  7 16:47:00.584: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4862 /api/v1/namespaces/watch-4862/configmaps/e2e-watch-test-resource-version bb8f1b82-a1a0-4e10-8951-66d5a7028098 949157 0 2020-07-07 16:47:00 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:47:00.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4862" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":221,"skipped":3872,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
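The Watchers test above confirms that a watch started at a known resourceVersion replays only the events after that revision (here, the second MODIFIED and the DELETED event, at resourceVersions 949156 and 949157). A hedged sketch of the same mechanism against the raw API, assuming a configmap named my-cm in the default namespace; kubectl get --raw should stream the chunked watch response:

# Capture a resourceVersion, then watch from it; events at or before RV are not delivered.
RV=$(kubectl get configmap my-cm -o jsonpath='{.metadata.resourceVersion}')
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"
# Expect a stream of JSON watch events: {"type":"MODIFIED",...}, {"type":"DELETED",...}
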
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:47:00.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 16:47:02.810: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 16:47:05.990: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737222, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:47:08.358: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737222, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:47:10.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737222, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:47:11.994: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737223, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737222, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 16:47:15.669: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:47:15.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4797-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:47:20.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3599" for this suite.
STEP: Destroying namespace "webhook-3599-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.024 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":222,"skipped":3888,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
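The webhook test above registers a mutating webhook against a multi-version custom resource, then flips the CRD's storage version from v1 to v2 and patches the CR, checking the webhook mutates under both storage versions. A sketch of such a registration; the service name, path, and webhook name are hypothetical (the e2e framework generates its own):

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-cr-mutator
webhooks:
- name: mutate-custom-resource.example.com
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1", "v2"]
    operations: ["CREATE", "UPDATE"]
    resources: ["e2e-test-webhook-4797-crds"]
  clientConfig:
    service:
      namespace: webhook-3599          # hypothetical; the suite used this namespace
      name: e2e-test-webhook
      path: /mutating-custom-resource
    # caBundle: <base64 CA that signed the webhook's serving cert>
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF
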
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:47:20.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-1000
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1000
STEP: Deleting pre-stop pod
Jul  7 16:47:36.604: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:47:36.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1000" for this suite.

• [SLOW TEST:16.181 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":223,"skipped":3895,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
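The PreStop test above shows the kubelet running a container's preStop lifecycle hook when the pod is deleted; the tester then observes {"prestop": 1} recorded by the server pod. A minimal sketch of a preStop hook, with a hypothetical notification endpoint standing in for the test's server:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before SIGTERM when the pod is deleted.
          command: ["sh", "-c", "wget -q -O- http://server.prestop-1000.svc.cluster.local:8080/prestop || true"]
EOF
# "kubectl delete pod prestop-demo" triggers the hook before termination.
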
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:47:36.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jul  7 16:47:37.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:47:53.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2982" for this suite.

• [SLOW TEST:16.626 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":224,"skipped":3898,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
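The CustomResourcePublishOpenAPI test above marks one version of a multi-version CRD as not served, then checks that its definitions disappear from the aggregated OpenAPI document while the remaining version is unchanged. The served/storage switches live directly in the CRD spec; a sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true          # still published under /openapi/v2
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false         # unserved: its definition is removed from the spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
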
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:47:53.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:47:53.600: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul  7 16:47:53.801: INFO: Number of nodes with available pods: 0
Jul  7 16:47:53.801: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul  7 16:47:53.987: INFO: Number of nodes with available pods: 0
Jul  7 16:47:53.987: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:47:55.246: INFO: Number of nodes with available pods: 0
Jul  7 16:47:55.246: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:47:56.235: INFO: Number of nodes with available pods: 0
Jul  7 16:47:56.235: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:47:57.119: INFO: Number of nodes with available pods: 0
Jul  7 16:47:57.119: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:47:57.991: INFO: Number of nodes with available pods: 0
Jul  7 16:47:57.991: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:47:59.024: INFO: Number of nodes with available pods: 0
Jul  7 16:47:59.024: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:47:59.991: INFO: Number of nodes with available pods: 0
Jul  7 16:47:59.991: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:00.991: INFO: Number of nodes with available pods: 1
Jul  7 16:48:00.991: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul  7 16:48:01.255: INFO: Number of nodes with available pods: 1
Jul  7 16:48:01.255: INFO: Number of running nodes: 0, number of available pods: 1
Jul  7 16:48:02.628: INFO: Number of nodes with available pods: 0
Jul  7 16:48:02.628: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul  7 16:48:02.725: INFO: Number of nodes with available pods: 0
Jul  7 16:48:02.725: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:03.779: INFO: Number of nodes with available pods: 0
Jul  7 16:48:03.779: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:04.729: INFO: Number of nodes with available pods: 0
Jul  7 16:48:04.729: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:05.755: INFO: Number of nodes with available pods: 0
Jul  7 16:48:05.755: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:06.730: INFO: Number of nodes with available pods: 0
Jul  7 16:48:06.730: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:07.729: INFO: Number of nodes with available pods: 0
Jul  7 16:48:07.729: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:08.730: INFO: Number of nodes with available pods: 0
Jul  7 16:48:08.730: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:09.730: INFO: Number of nodes with available pods: 0
Jul  7 16:48:09.730: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:10.729: INFO: Number of nodes with available pods: 0
Jul  7 16:48:10.729: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:11.728: INFO: Number of nodes with available pods: 0
Jul  7 16:48:11.728: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:12.730: INFO: Number of nodes with available pods: 0
Jul  7 16:48:12.730: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:13.768: INFO: Number of nodes with available pods: 0
Jul  7 16:48:13.768: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:14.729: INFO: Number of nodes with available pods: 0
Jul  7 16:48:14.729: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:15.728: INFO: Number of nodes with available pods: 0
Jul  7 16:48:15.728: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:16.728: INFO: Number of nodes with available pods: 0
Jul  7 16:48:16.728: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:17.729: INFO: Number of nodes with available pods: 0
Jul  7 16:48:17.729: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:19.108: INFO: Number of nodes with available pods: 0
Jul  7 16:48:19.108: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:19.863: INFO: Number of nodes with available pods: 0
Jul  7 16:48:19.863: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:20.729: INFO: Number of nodes with available pods: 0
Jul  7 16:48:20.729: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:21.747: INFO: Number of nodes with available pods: 0
Jul  7 16:48:21.747: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:22.729: INFO: Number of nodes with available pods: 0
Jul  7 16:48:22.729: INFO: Node jerma-worker2 is running more than one daemon pod
Jul  7 16:48:25.070: INFO: Number of nodes with available pods: 1
Jul  7 16:48:25.070: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4380, will wait for the garbage collector to delete the pods
Jul  7 16:48:25.885: INFO: Deleting DaemonSet.extensions daemon-set took: 18.432793ms
Jul  7 16:48:26.185: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.332368ms
Jul  7 16:48:29.490: INFO: Number of nodes with available pods: 0
Jul  7 16:48:29.490: INFO: Number of running nodes: 0, number of available pods: 0
Jul  7 16:48:29.493: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4380/daemonsets","resourceVersion":"949602"},"items":null}

Jul  7 16:48:29.496: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4380/pods","resourceVersion":"949602"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:48:29.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4380" for this suite.

• [SLOW TEST:36.186 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":225,"skipped":3910,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
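The DaemonSet test above drives scheduling purely through node labels: with a nodeSelector in the pod template, relabeling a node from blue to green unschedules the daemon pod, and updating the selector (together with the update strategy) brings it back. A sketch of the objects involved; the DaemonSet name and label key are hypothetical, the node name and blue/green values match the log:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-demo
spec:
  selector:
    matchLabels:
      app: daemon-demo
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
EOF
# A daemon pod launches only on nodes matching the selector:
kubectl label node jerma-worker2 color=blue
# ...and is unscheduled again once the label no longer matches:
kubectl label node jerma-worker2 color=green --overwrite
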
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:48:29.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2129
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-2129
I0707 16:48:30.702098       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2129, replica count: 2
I0707 16:48:33.752619       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 16:48:36.752874       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 16:48:39.753043       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 16:48:42.753435       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  7 16:48:42.753: INFO: Creating new exec pod
Jul  7 16:49:00.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2129 execpod5zwjs -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul  7 16:49:00.707: INFO: stderr: "I0707 16:49:00.620905    2568 log.go:172] (0xc000cb54a0) (0xc000bf66e0) Create stream\nI0707 16:49:00.620941    2568 log.go:172] (0xc000cb54a0) (0xc000bf66e0) Stream added, broadcasting: 1\nI0707 16:49:00.625664    2568 log.go:172] (0xc000cb54a0) Reply frame received for 1\nI0707 16:49:00.625706    2568 log.go:172] (0xc000cb54a0) (0xc0005e2780) Create stream\nI0707 16:49:00.625715    2568 log.go:172] (0xc000cb54a0) (0xc0005e2780) Stream added, broadcasting: 3\nI0707 16:49:00.626603    2568 log.go:172] (0xc000cb54a0) Reply frame received for 3\nI0707 16:49:00.626645    2568 log.go:172] (0xc000cb54a0) (0xc0007774a0) Create stream\nI0707 16:49:00.626654    2568 log.go:172] (0xc000cb54a0) (0xc0007774a0) Stream added, broadcasting: 5\nI0707 16:49:00.627494    2568 log.go:172] (0xc000cb54a0) Reply frame received for 5\nI0707 16:49:00.699386    2568 log.go:172] (0xc000cb54a0) Data frame received for 5\nI0707 16:49:00.699495    2568 log.go:172] (0xc0007774a0) (5) Data frame handling\nI0707 16:49:00.699549    2568 log.go:172] (0xc0007774a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0707 16:49:00.699856    2568 log.go:172] (0xc000cb54a0) Data frame received for 5\nI0707 16:49:00.699876    2568 log.go:172] (0xc0007774a0) (5) Data frame handling\nI0707 16:49:00.699897    2568 log.go:172] (0xc0007774a0) (5) Data frame sent\nI0707 16:49:00.699908    2568 log.go:172] (0xc000cb54a0) Data frame received for 5\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0707 16:49:00.699916    2568 log.go:172] (0xc0007774a0) (5) Data frame handling\nI0707 16:49:00.701601    2568 log.go:172] (0xc000cb54a0) Data frame received for 3\nI0707 16:49:00.701621    2568 log.go:172] (0xc0005e2780) (3) Data frame handling\nI0707 16:49:00.702925    2568 log.go:172] (0xc000cb54a0) Data frame received for 1\nI0707 16:49:00.702947    2568 log.go:172] (0xc000bf66e0) (1) Data frame handling\nI0707 16:49:00.702955    2568 log.go:172] (0xc000bf66e0) (1) Data frame sent\nI0707 16:49:00.702967    2568 log.go:172] (0xc000cb54a0) (0xc000bf66e0) Stream removed, broadcasting: 1\nI0707 16:49:00.703014    2568 log.go:172] (0xc000cb54a0) Go away received\nI0707 16:49:00.703213    2568 log.go:172] (0xc000cb54a0) (0xc000bf66e0) Stream removed, broadcasting: 1\nI0707 16:49:00.703232    2568 log.go:172] (0xc000cb54a0) (0xc0005e2780) Stream removed, broadcasting: 3\nI0707 16:49:00.703239    2568 log.go:172] (0xc000cb54a0) (0xc0007774a0) Stream removed, broadcasting: 5\n"
Jul  7 16:49:00.707: INFO: stdout: ""
Jul  7 16:49:00.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2129 execpod5zwjs -- /bin/sh -x -c nc -zv -t -w 2 10.103.86.34 80'
Jul  7 16:49:00.899: INFO: stderr: "I0707 16:49:00.833645    2591 log.go:172] (0xc0000f51e0) (0xc00060fb80) Create stream\nI0707 16:49:00.833711    2591 log.go:172] (0xc0000f51e0) (0xc00060fb80) Stream added, broadcasting: 1\nI0707 16:49:00.836495    2591 log.go:172] (0xc0000f51e0) Reply frame received for 1\nI0707 16:49:00.836546    2591 log.go:172] (0xc0000f51e0) (0xc0009ec000) Create stream\nI0707 16:49:00.836560    2591 log.go:172] (0xc0000f51e0) (0xc0009ec000) Stream added, broadcasting: 3\nI0707 16:49:00.837909    2591 log.go:172] (0xc0000f51e0) Reply frame received for 3\nI0707 16:49:00.837949    2591 log.go:172] (0xc0000f51e0) (0xc0008d8000) Create stream\nI0707 16:49:00.837963    2591 log.go:172] (0xc0000f51e0) (0xc0008d8000) Stream added, broadcasting: 5\nI0707 16:49:00.838789    2591 log.go:172] (0xc0000f51e0) Reply frame received for 5\nI0707 16:49:00.891388    2591 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0707 16:49:00.891414    2591 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0707 16:49:00.891462    2591 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0707 16:49:00.891496    2591 log.go:172] (0xc0008d8000) (5) Data frame handling\nI0707 16:49:00.891519    2591 log.go:172] (0xc0008d8000) (5) Data frame sent\nI0707 16:49:00.891528    2591 log.go:172] (0xc0000f51e0) Data frame received for 5\n+ nc -zv -t -w 2 10.103.86.34 80\nConnection to 10.103.86.34 80 port [tcp/http] succeeded!\nI0707 16:49:00.891546    2591 log.go:172] (0xc0008d8000) (5) Data frame handling\nI0707 16:49:00.892724    2591 log.go:172] (0xc0000f51e0) Data frame received for 1\nI0707 16:49:00.892744    2591 log.go:172] (0xc00060fb80) (1) Data frame handling\nI0707 16:49:00.892754    2591 log.go:172] (0xc00060fb80) (1) Data frame sent\nI0707 16:49:00.892766    2591 log.go:172] (0xc0000f51e0) (0xc00060fb80) Stream removed, broadcasting: 1\nI0707 16:49:00.892778    2591 log.go:172] (0xc0000f51e0) Go away received\nI0707 16:49:00.893098    2591 log.go:172] (0xc0000f51e0) (0xc00060fb80) Stream removed, broadcasting: 1\nI0707 16:49:00.893325    2591 log.go:172] (0xc0000f51e0) (0xc0009ec000) Stream removed, broadcasting: 3\nI0707 16:49:00.893342    2591 log.go:172] (0xc0000f51e0) (0xc0008d8000) Stream removed, broadcasting: 5\n"
Jul  7 16:49:00.899: INFO: stdout: ""
Jul  7 16:49:00.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2129 execpod5zwjs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32179'
Jul  7 16:49:01.095: INFO: stderr: "I0707 16:49:01.016376    2613 log.go:172] (0xc000ac1d90) (0xc00070bf40) Create stream\nI0707 16:49:01.016428    2613 log.go:172] (0xc000ac1d90) (0xc00070bf40) Stream added, broadcasting: 1\nI0707 16:49:01.018070    2613 log.go:172] (0xc000ac1d90) Reply frame received for 1\nI0707 16:49:01.018098    2613 log.go:172] (0xc000ac1d90) (0xc000ab26e0) Create stream\nI0707 16:49:01.018105    2613 log.go:172] (0xc000ac1d90) (0xc000ab26e0) Stream added, broadcasting: 3\nI0707 16:49:01.018770    2613 log.go:172] (0xc000ac1d90) Reply frame received for 3\nI0707 16:49:01.018810    2613 log.go:172] (0xc000ac1d90) (0xc000b44280) Create stream\nI0707 16:49:01.018823    2613 log.go:172] (0xc000ac1d90) (0xc000b44280) Stream added, broadcasting: 5\nI0707 16:49:01.019498    2613 log.go:172] (0xc000ac1d90) Reply frame received for 5\nI0707 16:49:01.086678    2613 log.go:172] (0xc000ac1d90) Data frame received for 3\nI0707 16:49:01.086736    2613 log.go:172] (0xc000ab26e0) (3) Data frame handling\nI0707 16:49:01.086763    2613 log.go:172] (0xc000ac1d90) Data frame received for 5\nI0707 16:49:01.086776    2613 log.go:172] (0xc000b44280) (5) Data frame handling\nI0707 16:49:01.086791    2613 log.go:172] (0xc000b44280) (5) Data frame sent\nI0707 16:49:01.086802    2613 log.go:172] (0xc000ac1d90) Data frame received for 5\nI0707 16:49:01.086811    2613 log.go:172] (0xc000b44280) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32179\nConnection to 172.17.0.10 32179 port [tcp/32179] succeeded!\nI0707 16:49:01.089646    2613 log.go:172] (0xc000ac1d90) Data frame received for 1\nI0707 16:49:01.089668    2613 log.go:172] (0xc00070bf40) (1) Data frame handling\nI0707 16:49:01.089685    2613 log.go:172] (0xc00070bf40) (1) Data frame sent\nI0707 16:49:01.089699    2613 log.go:172] (0xc000ac1d90) (0xc00070bf40) Stream removed, broadcasting: 1\nI0707 16:49:01.089924    2613 log.go:172] (0xc000ac1d90) Go away received\nI0707 16:49:01.090022    2613 log.go:172] (0xc000ac1d90) (0xc00070bf40) Stream removed, broadcasting: 1\nI0707 16:49:01.090038    2613 log.go:172] (0xc000ac1d90) (0xc000ab26e0) Stream removed, broadcasting: 3\nI0707 16:49:01.090046    2613 log.go:172] (0xc000ac1d90) (0xc000b44280) Stream removed, broadcasting: 5\n"
Jul  7 16:49:01.095: INFO: stdout: ""
Jul  7 16:49:01.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2129 execpod5zwjs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32179'
Jul  7 16:49:02.200: INFO: stderr: "I0707 16:49:02.112497    2633 log.go:172] (0xc000504a50) (0xc0006dc3c0) Create stream\nI0707 16:49:02.112588    2633 log.go:172] (0xc000504a50) (0xc0006dc3c0) Stream added, broadcasting: 1\nI0707 16:49:02.116475    2633 log.go:172] (0xc000504a50) Reply frame received for 1\nI0707 16:49:02.116537    2633 log.go:172] (0xc000504a50) (0xc000884280) Create stream\nI0707 16:49:02.116557    2633 log.go:172] (0xc000504a50) (0xc000884280) Stream added, broadcasting: 3\nI0707 16:49:02.117889    2633 log.go:172] (0xc000504a50) Reply frame received for 3\nI0707 16:49:02.117919    2633 log.go:172] (0xc000504a50) (0xc0006dc500) Create stream\nI0707 16:49:02.117928    2633 log.go:172] (0xc000504a50) (0xc0006dc500) Stream added, broadcasting: 5\nI0707 16:49:02.119135    2633 log.go:172] (0xc000504a50) Reply frame received for 5\nI0707 16:49:02.188232    2633 log.go:172] (0xc000504a50) Data frame received for 5\nI0707 16:49:02.188272    2633 log.go:172] (0xc0006dc500) (5) Data frame handling\nI0707 16:49:02.188302    2633 log.go:172] (0xc0006dc500) (5) Data frame sent\nI0707 16:49:02.188317    2633 log.go:172] (0xc000504a50) Data frame received for 5\nI0707 16:49:02.188332    2633 log.go:172] (0xc0006dc500) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32179\nConnection to 172.17.0.8 32179 port [tcp/32179] succeeded!\nI0707 16:49:02.189548    2633 log.go:172] (0xc000504a50) Data frame received for 3\nI0707 16:49:02.189582    2633 log.go:172] (0xc000884280) (3) Data frame handling\nI0707 16:49:02.195845    2633 log.go:172] (0xc000504a50) Data frame received for 1\nI0707 16:49:02.195860    2633 log.go:172] (0xc0006dc3c0) (1) Data frame handling\nI0707 16:49:02.195872    2633 log.go:172] (0xc0006dc3c0) (1) Data frame sent\nI0707 16:49:02.196135    2633 log.go:172] (0xc000504a50) (0xc0006dc3c0) Stream removed, broadcasting: 1\nI0707 16:49:02.196166    2633 log.go:172] (0xc000504a50) Go away received\nI0707 16:49:02.196508    2633 log.go:172] (0xc000504a50) (0xc0006dc3c0) Stream removed, broadcasting: 1\nI0707 16:49:02.196523    2633 log.go:172] (0xc000504a50) (0xc000884280) Stream removed, broadcasting: 3\nI0707 16:49:02.196529    2633 log.go:172] (0xc000504a50) (0xc0006dc500) Stream removed, broadcasting: 5\n"
Jul  7 16:49:02.200: INFO: stdout: ""
Jul  7 16:49:02.200: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:49:03.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2129" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:33.829 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":226,"skipped":3934,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
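The Services test above converts a type=ExternalName service into type=NodePort backed by a replication controller, then verifies reachability with nc against the service name, the ClusterIP, and each node's NodePort. A sketch of the conversion, with hypothetical names; the strategic merge patch clears externalName and adds the selector and port a cluster-backed service needs:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: externalname-demo
spec:
  type: ExternalName
  externalName: example.com
EOF
kubectl patch service externalname-demo -p \
  '{"spec":{"type":"NodePort","externalName":null,"selector":{"app":"demo"},"ports":[{"port":80,"targetPort":80}]}}'
# From a pod in the cluster, the same checks the test runs:
#   nc -zv -t -w 2 externalname-demo 80
#   nc -zv -t -w 2 <cluster-ip> 80
#   nc -zv -t -w 2 <node-ip> <node-port>
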
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:49:03.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:49:04.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul  7 16:49:06.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2587 create -f -'
Jul  7 16:49:16.753: INFO: stderr: ""
Jul  7 16:49:16.753: INFO: stdout: "e2e-test-crd-publish-openapi-4891-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul  7 16:49:16.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2587 delete e2e-test-crd-publish-openapi-4891-crds test-cr'
Jul  7 16:49:16.887: INFO: stderr: ""
Jul  7 16:49:16.887: INFO: stdout: "e2e-test-crd-publish-openapi-4891-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jul  7 16:49:16.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2587 apply -f -'
Jul  7 16:49:17.145: INFO: stderr: ""
Jul  7 16:49:17.145: INFO: stdout: "e2e-test-crd-publish-openapi-4891-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul  7 16:49:17.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2587 delete e2e-test-crd-publish-openapi-4891-crds test-cr'
Jul  7 16:49:17.266: INFO: stderr: ""
Jul  7 16:49:17.266: INFO: stdout: "e2e-test-crd-publish-openapi-4891-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul  7 16:49:17.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4891-crds'
Jul  7 16:49:17.532: INFO: stderr: ""
Jul  7 16:49:17.532: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4891-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:49:20.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2587" for this suite.

• [SLOW TEST:17.194 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":227,"skipped":3965,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:49:20.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-a254a6e7-431d-4fe2-bedd-a215a4484f87
STEP: Creating a pod to test consume secrets
Jul  7 16:49:21.262: INFO: Waiting up to 5m0s for pod "pod-secrets-7dbedc80-d342-4daf-9acb-6a2a8684508d" in namespace "secrets-458" to be "success or failure"
Jul  7 16:49:21.316: INFO: Pod "pod-secrets-7dbedc80-d342-4daf-9acb-6a2a8684508d": Phase="Pending", Reason="", readiness=false. Elapsed: 54.429455ms
Jul  7 16:49:23.320: INFO: Pod "pod-secrets-7dbedc80-d342-4daf-9acb-6a2a8684508d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058375476s
Jul  7 16:49:25.324: INFO: Pod "pod-secrets-7dbedc80-d342-4daf-9acb-6a2a8684508d": Phase="Running", Reason="", readiness=true. Elapsed: 4.062019849s
Jul  7 16:49:27.599: INFO: Pod "pod-secrets-7dbedc80-d342-4daf-9acb-6a2a8684508d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.337237739s
STEP: Saw pod success
Jul  7 16:49:27.599: INFO: Pod "pod-secrets-7dbedc80-d342-4daf-9acb-6a2a8684508d" satisfied condition "success or failure"
Jul  7 16:49:27.602: INFO: Trying to get logs from node jerma-worker pod pod-secrets-7dbedc80-d342-4daf-9acb-6a2a8684508d container secret-volume-test: 
STEP: delete the pod
Jul  7 16:49:27.780: INFO: Waiting for pod pod-secrets-7dbedc80-d342-4daf-9acb-6a2a8684508d to disappear
Jul  7 16:49:27.836: INFO: Pod pod-secrets-7dbedc80-d342-4daf-9acb-6a2a8684508d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:49:27.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-458" for this suite.
STEP: Destroying namespace "secret-namespace-3219" for this suite.

• [SLOW TEST:7.241 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3965,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
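The Secrets test above creates two secrets with the same name in different namespaces and checks that a pod mounts the one from its own namespace; note the suite tears down both "secrets-458" and "secret-namespace-3219". A minimal sketch of the pattern, with hypothetical names:

# Same secret name in two namespaces; the pod sees only its own namespace's copy.
kubectl create namespace other-ns
kubectl create secret generic shared-name --from-literal=data-1=from-default
kubectl create secret generic shared-name --from-literal=data-1=from-other -n other-ns
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]   # prints "from-default"
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name
EOF
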
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:49:27.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-2c2r
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 16:49:28.162: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2c2r" in namespace "subpath-2383" to be "success or failure"
Jul  7 16:49:28.277: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Pending", Reason="", readiness=false. Elapsed: 114.076755ms
Jul  7 16:49:30.280: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117927926s
Jul  7 16:49:32.285: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122420725s
Jul  7 16:49:34.330: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167596329s
Jul  7 16:49:36.372: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Running", Reason="", readiness=true. Elapsed: 8.209306442s
Jul  7 16:49:38.376: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Running", Reason="", readiness=true. Elapsed: 10.214001252s
Jul  7 16:49:40.381: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Running", Reason="", readiness=true. Elapsed: 12.218695938s
Jul  7 16:49:42.385: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Running", Reason="", readiness=true. Elapsed: 14.222760388s
Jul  7 16:49:44.390: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Running", Reason="", readiness=true. Elapsed: 16.227584982s
Jul  7 16:49:46.467: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Running", Reason="", readiness=true. Elapsed: 18.30462763s
Jul  7 16:49:48.491: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Running", Reason="", readiness=true. Elapsed: 20.329002327s
Jul  7 16:49:50.563: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Running", Reason="", readiness=true. Elapsed: 22.40039244s
Jul  7 16:49:52.566: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Running", Reason="", readiness=true. Elapsed: 24.404034411s
Jul  7 16:49:54.969: INFO: Pod "pod-subpath-test-downwardapi-2c2r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.806635483s
STEP: Saw pod success
Jul  7 16:49:54.969: INFO: Pod "pod-subpath-test-downwardapi-2c2r" satisfied condition "success or failure"
Jul  7 16:49:54.971: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-2c2r container test-container-subpath-downwardapi-2c2r: 
STEP: delete the pod
Jul  7 16:49:55.733: INFO: Waiting for pod pod-subpath-test-downwardapi-2c2r to disappear
Jul  7 16:49:55.813: INFO: Pod pod-subpath-test-downwardapi-2c2r no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-2c2r
Jul  7 16:49:55.813: INFO: Deleting pod "pod-subpath-test-downwardapi-2c2r" in namespace "subpath-2383"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:49:56.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2383" for this suite.

• [SLOW TEST:28.628 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":229,"skipped":3968,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
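The Subpath test above mounts a downward API volume through a subPath and keeps reading it while the pod runs, verifying atomic-writer semantics. A minimal sketch of a downward API item exposed via subPath, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/probe/podname"]     # prints the pod's own name
    volumeMounts:
    - name: downward
      mountPath: /probe/podname
      subPath: podname                     # mounts just this file from the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
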
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:49:56.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul  7 16:49:57.116: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  7 16:49:57.446: INFO: Waiting for terminating namespaces to be deleted...
Jul  7 16:49:57.450: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Jul  7 16:49:57.454: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  7 16:49:57.454: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:49:57.454: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  7 16:49:57.454: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 16:49:57.454: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Jul  7 16:49:57.459: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  7 16:49:57.459: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:49:57.459: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  7 16:49:57.459: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Jul  7 16:49:58.842: INFO: Pod kindnet-gnxwn requesting resource cpu=100m on Node jerma-worker
Jul  7 16:49:58.842: INFO: Pod kindnet-qg8qr requesting resource cpu=100m on Node jerma-worker2
Jul  7 16:49:58.842: INFO: Pod kube-proxy-8sp85 requesting resource cpu=0m on Node jerma-worker
Jul  7 16:49:58.842: INFO: Pod kube-proxy-b2ncl requesting resource cpu=0m on Node jerma-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Jul  7 16:49:58.842: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Jul  7 16:49:58.890: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-094d4c4e-4ac8-4285-8d53-a4f56bdc6f27.161f866dcd7dc196], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1864/filler-pod-094d4c4e-4ac8-4285-8d53-a4f56bdc6f27 to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-094d4c4e-4ac8-4285-8d53-a4f56bdc6f27.161f866ff41e99f9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-094d4c4e-4ac8-4285-8d53-a4f56bdc6f27.161f8671bb1cc6f8], Reason = [Created], Message = [Created container filler-pod-094d4c4e-4ac8-4285-8d53-a4f56bdc6f27]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-094d4c4e-4ac8-4285-8d53-a4f56bdc6f27.161f8671fc6b3acb], Reason = [Started], Message = [Started container filler-pod-094d4c4e-4ac8-4285-8d53-a4f56bdc6f27]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-162e9740-82d8-4b5c-aa29-a4f1483cb933.161f866dcee23e2c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1864/filler-pod-162e9740-82d8-4b5c-aa29-a4f1483cb933 to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-162e9740-82d8-4b5c-aa29-a4f1483cb933.161f866f7ad57012], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-162e9740-82d8-4b5c-aa29-a4f1483cb933.161f8670d7c39bd6], Reason = [Created], Message = [Created container filler-pod-162e9740-82d8-4b5c-aa29-a4f1483cb933]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-162e9740-82d8-4b5c-aa29-a4f1483cb933.161f867129069cf5], Reason = [Started], Message = [Started container filler-pod-162e9740-82d8-4b5c-aa29-a4f1483cb933]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.161f867277c4c80c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:50:20.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1864" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:24.020 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":230,"skipped":3969,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
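The scheduling test above saturates each node's allocatable CPU with filler pods, then shows that one more pod requesting CPU is rejected with a FailedScheduling event ("Insufficient cpu"). The predicate is driven entirely by resource requests; a sketch, with a hypothetical request chosen to exceed whatever remains allocatable:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod-demo
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 600m        # hypothetical: more CPU than any node has left
EOF
# Expect a FailedScheduling event rather than a running pod:
kubectl describe pod additional-pod-demo
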
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:50:20.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jul  7 16:50:20.768: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:50:33.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7453" for this suite.

• [SLOW TEST:13.085 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":231,"skipped":3978,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
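The InitContainer test above builds a RestartNever pod whose init containers must all run to completion, in order, before the app container starts. A minimal sketch of that pod shape, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo done"]
EOF
# init-1 and init-2 run sequentially to completion before "main" is started.
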
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:50:33.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 16:50:35.984: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 16:50:37.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737436, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737436, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737436, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737435, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:50:40.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737436, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737436, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737436, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737435, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:50:41.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737436, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737436, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737436, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737435, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 16:50:45.152: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: running 'kubectl attach' on the pod, which should be denied by the webhook
Jul  7 16:50:53.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-194 to-be-attached-pod -i -c=container1'
Jul  7 16:50:53.521: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:50:53.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-194" for this suite.
STEP: Destroying namespace "webhook-194-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:21.855 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":232,"skipped":3998,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:50:55.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:50:55.628: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:50:59.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8990" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":233,"skipped":3999,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:50:59.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-bb6088d1-4633-4775-b0d5-4de710901a9b
STEP: Creating a pod to test consume configMaps
Jul  7 16:51:00.031: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf9eb199-0bf8-4d3e-bd54-5e1de7710744" in namespace "configmap-3541" to be "success or failure"
Jul  7 16:51:01.787: INFO: Pod "pod-configmaps-bf9eb199-0bf8-4d3e-bd54-5e1de7710744": Phase="Pending", Reason="", readiness=false. Elapsed: 1.755859494s
Jul  7 16:51:03.811: INFO: Pod "pod-configmaps-bf9eb199-0bf8-4d3e-bd54-5e1de7710744": Phase="Pending", Reason="", readiness=false. Elapsed: 3.779537401s
Jul  7 16:51:06.804: INFO: Pod "pod-configmaps-bf9eb199-0bf8-4d3e-bd54-5e1de7710744": Phase="Pending", Reason="", readiness=false. Elapsed: 6.772355092s
Jul  7 16:51:08.984: INFO: Pod "pod-configmaps-bf9eb199-0bf8-4d3e-bd54-5e1de7710744": Phase="Pending", Reason="", readiness=false. Elapsed: 8.952869279s
Jul  7 16:51:11.042: INFO: Pod "pod-configmaps-bf9eb199-0bf8-4d3e-bd54-5e1de7710744": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.01054398s
STEP: Saw pod success
Jul  7 16:51:11.042: INFO: Pod "pod-configmaps-bf9eb199-0bf8-4d3e-bd54-5e1de7710744" satisfied condition "success or failure"
Jul  7 16:51:11.046: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-bf9eb199-0bf8-4d3e-bd54-5e1de7710744 container configmap-volume-test: 
STEP: delete the pod
Jul  7 16:51:12.374: INFO: Waiting for pod pod-configmaps-bf9eb199-0bf8-4d3e-bd54-5e1de7710744 to disappear
Jul  7 16:51:12.584: INFO: Pod pod-configmaps-bf9eb199-0bf8-4d3e-bd54-5e1de7710744 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:51:12.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3541" for this suite.

• [SLOW TEST:13.292 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":4003,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:51:12.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:51:26.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5334" for this suite.

• [SLOW TEST:13.970 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":4005,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:51:26.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-a100d540-1c5f-41f2-b13b-b5ccb5f31a43
STEP: Creating a pod to test consume configMaps
Jul  7 16:51:28.742: INFO: Waiting up to 5m0s for pod "pod-configmaps-0ff5b254-f68f-4d4a-870a-61f95521a185" in namespace "configmap-1932" to be "success or failure"
Jul  7 16:51:28.990: INFO: Pod "pod-configmaps-0ff5b254-f68f-4d4a-870a-61f95521a185": Phase="Pending", Reason="", readiness=false. Elapsed: 247.665038ms
Jul  7 16:51:31.050: INFO: Pod "pod-configmaps-0ff5b254-f68f-4d4a-870a-61f95521a185": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307278s
Jul  7 16:51:33.218: INFO: Pod "pod-configmaps-0ff5b254-f68f-4d4a-870a-61f95521a185": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475251879s
Jul  7 16:51:35.728: INFO: Pod "pod-configmaps-0ff5b254-f68f-4d4a-870a-61f95521a185": Phase="Pending", Reason="", readiness=false. Elapsed: 6.985487029s
Jul  7 16:51:37.731: INFO: Pod "pod-configmaps-0ff5b254-f68f-4d4a-870a-61f95521a185": Phase="Running", Reason="", readiness=true. Elapsed: 8.988743993s
Jul  7 16:51:40.422: INFO: Pod "pod-configmaps-0ff5b254-f68f-4d4a-870a-61f95521a185": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.679841062s
STEP: Saw pod success
Jul  7 16:51:40.422: INFO: Pod "pod-configmaps-0ff5b254-f68f-4d4a-870a-61f95521a185" satisfied condition "success or failure"
Jul  7 16:51:40.425: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-0ff5b254-f68f-4d4a-870a-61f95521a185 container configmap-volume-test: 
STEP: delete the pod
Jul  7 16:51:41.725: INFO: Waiting for pod pod-configmaps-0ff5b254-f68f-4d4a-870a-61f95521a185 to disappear
Jul  7 16:51:41.828: INFO: Pod pod-configmaps-0ff5b254-f68f-4d4a-870a-61f95521a185 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:51:41.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1932" for this suite.

• [SLOW TEST:15.272 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":4013,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:51:41.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  7 16:51:50.023: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:51:50.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-833" for this suite.

• [SLOW TEST:8.757 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":4017,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:51:50.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul  7 16:51:58.907: INFO: Successfully updated pod "labelsupdate6779fd15-ff6d-4eda-9d6e-a6b3da0cce05"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:52:01.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3163" for this suite.

• [SLOW TEST:11.181 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":4033,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:52:01.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:52:12.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2532" for this suite.

• [SLOW TEST:10.621 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":4037,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:52:12.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 16:52:13.683: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88c47014-c1a8-4fea-a395-24133800772c" in namespace "projected-9665" to be "success or failure"
Jul  7 16:52:14.341: INFO: Pod "downwardapi-volume-88c47014-c1a8-4fea-a395-24133800772c": Phase="Pending", Reason="", readiness=false. Elapsed: 658.152843ms
Jul  7 16:52:16.597: INFO: Pod "downwardapi-volume-88c47014-c1a8-4fea-a395-24133800772c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.91427605s
Jul  7 16:52:18.600: INFO: Pod "downwardapi-volume-88c47014-c1a8-4fea-a395-24133800772c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.917092417s
Jul  7 16:52:20.830: INFO: Pod "downwardapi-volume-88c47014-c1a8-4fea-a395-24133800772c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.147495969s
Jul  7 16:52:22.955: INFO: Pod "downwardapi-volume-88c47014-c1a8-4fea-a395-24133800772c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.272309498s
Jul  7 16:52:25.231: INFO: Pod "downwardapi-volume-88c47014-c1a8-4fea-a395-24133800772c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.548056107s
STEP: Saw pod success
Jul  7 16:52:25.231: INFO: Pod "downwardapi-volume-88c47014-c1a8-4fea-a395-24133800772c" satisfied condition "success or failure"
Jul  7 16:52:25.235: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-88c47014-c1a8-4fea-a395-24133800772c container client-container: 
STEP: delete the pod
Jul  7 16:52:25.851: INFO: Waiting for pod downwardapi-volume-88c47014-c1a8-4fea-a395-24133800772c to disappear
Jul  7 16:52:25.920: INFO: Pod downwardapi-volume-88c47014-c1a8-4fea-a395-24133800772c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:52:25.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9665" for this suite.

• [SLOW TEST:13.644 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4044,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:52:26.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  7 16:52:26.137: INFO: Waiting up to 5m0s for pod "pod-8d74a2a2-b08c-4991-b494-454cf34001eb" in namespace "emptydir-4661" to be "success or failure"
Jul  7 16:52:26.162: INFO: Pod "pod-8d74a2a2-b08c-4991-b494-454cf34001eb": Phase="Pending", Reason="", readiness=false. Elapsed: 25.745256ms
Jul  7 16:52:28.272: INFO: Pod "pod-8d74a2a2-b08c-4991-b494-454cf34001eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135247917s
Jul  7 16:52:30.721: INFO: Pod "pod-8d74a2a2-b08c-4991-b494-454cf34001eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.584871103s
Jul  7 16:52:32.724: INFO: Pod "pod-8d74a2a2-b08c-4991-b494-454cf34001eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.587303307s
STEP: Saw pod success
Jul  7 16:52:32.724: INFO: Pod "pod-8d74a2a2-b08c-4991-b494-454cf34001eb" satisfied condition "success or failure"
Jul  7 16:52:32.727: INFO: Trying to get logs from node jerma-worker pod pod-8d74a2a2-b08c-4991-b494-454cf34001eb container test-container: 
STEP: delete the pod
Jul  7 16:52:32.927: INFO: Waiting for pod pod-8d74a2a2-b08c-4991-b494-454cf34001eb to disappear
Jul  7 16:52:33.027: INFO: Pod pod-8d74a2a2-b08c-4991-b494-454cf34001eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:52:33.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4661" for this suite.

• [SLOW TEST:6.996 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4056,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:52:33.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul  7 16:52:38.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4942'
Jul  7 16:52:38.527: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  7 16:52:38.527: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jul  7 16:52:39.452: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-t8j45]
Jul  7 16:52:39.452: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-t8j45" in namespace "kubectl-4942" to be "running and ready"
Jul  7 16:52:39.801: INFO: Pod "e2e-test-httpd-rc-t8j45": Phase="Pending", Reason="", readiness=false. Elapsed: 347.995019ms
Jul  7 16:52:42.455: INFO: Pod "e2e-test-httpd-rc-t8j45": Phase="Pending", Reason="", readiness=false. Elapsed: 3.002769895s
Jul  7 16:52:45.560: INFO: Pod "e2e-test-httpd-rc-t8j45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107975723s
Jul  7 16:52:47.564: INFO: Pod "e2e-test-httpd-rc-t8j45": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111225947s
Jul  7 16:52:49.567: INFO: Pod "e2e-test-httpd-rc-t8j45": Phase="Running", Reason="", readiness=true. Elapsed: 10.114822934s
Jul  7 16:52:49.567: INFO: Pod "e2e-test-httpd-rc-t8j45" satisfied condition "running and ready"
Jul  7 16:52:49.567: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-t8j45]
Jul  7 16:52:49.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-4942'
Jul  7 16:52:49.691: INFO: stderr: ""
Jul  7 16:52:49.691: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.106. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.106. Set the 'ServerName' directive globally to suppress this message\n[Tue Jul 07 16:52:48.406149 2020] [mpm_event:notice] [pid 1:tid 140695583140712] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue Jul 07 16:52:48.406210 2020] [core:notice] [pid 1:tid 140695583140712] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530
Jul  7 16:52:49.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4942'
Jul  7 16:52:49.800: INFO: stderr: ""
Jul  7 16:52:49.800: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:52:49.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4942" for this suite.

• [SLOW TEST:16.830 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":242,"skipped":4060,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:52:49.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:52:50.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9223" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":243,"skipped":4064,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:52:50.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 16:52:50.621: INFO: Creating ReplicaSet my-hostname-basic-d1742119-d376-44de-a48d-8eb67a0f5bbc
Jul  7 16:52:51.059: INFO: Pod name my-hostname-basic-d1742119-d376-44de-a48d-8eb67a0f5bbc: Found 0 pods out of 1
Jul  7 16:52:56.065: INFO: Pod name my-hostname-basic-d1742119-d376-44de-a48d-8eb67a0f5bbc: Found 1 pods out of 1
Jul  7 16:52:56.065: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d1742119-d376-44de-a48d-8eb67a0f5bbc" is running
Jul  7 16:53:00.471: INFO: Pod "my-hostname-basic-d1742119-d376-44de-a48d-8eb67a0f5bbc-qg9zt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 16:52:52 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 16:52:52 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d1742119-d376-44de-a48d-8eb67a0f5bbc]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 16:52:52 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d1742119-d376-44de-a48d-8eb67a0f5bbc]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 16:52:51 +0000 UTC Reason: Message:}])
Jul  7 16:53:00.471: INFO: Trying to dial the pod
Jul  7 16:53:05.482: INFO: Controller my-hostname-basic-d1742119-d376-44de-a48d-8eb67a0f5bbc: Got expected result from replica 1 [my-hostname-basic-d1742119-d376-44de-a48d-8eb67a0f5bbc-qg9zt]: "my-hostname-basic-d1742119-d376-44de-a48d-8eb67a0f5bbc-qg9zt", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:53:05.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5369" for this suite.

• [SLOW TEST:15.290 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":244,"skipped":4098,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:53:05.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6076
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  7 16:53:05.623: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  7 16:53:36.273: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.108:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6076 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:53:36.273: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:53:36.300221       6 log.go:172] (0xc002450a50) (0xc001e0caa0) Create stream
I0707 16:53:36.300255       6 log.go:172] (0xc002450a50) (0xc001e0caa0) Stream added, broadcasting: 1
I0707 16:53:36.303034       6 log.go:172] (0xc002450a50) Reply frame received for 1
I0707 16:53:36.303077       6 log.go:172] (0xc002450a50) (0xc002740320) Create stream
I0707 16:53:36.303103       6 log.go:172] (0xc002450a50) (0xc002740320) Stream added, broadcasting: 3
I0707 16:53:36.304075       6 log.go:172] (0xc002450a50) Reply frame received for 3
I0707 16:53:36.304125       6 log.go:172] (0xc002450a50) (0xc000a82820) Create stream
I0707 16:53:36.304148       6 log.go:172] (0xc002450a50) (0xc000a82820) Stream added, broadcasting: 5
I0707 16:53:36.305011       6 log.go:172] (0xc002450a50) Reply frame received for 5
I0707 16:53:36.360313       6 log.go:172] (0xc002450a50) Data frame received for 3
I0707 16:53:36.360360       6 log.go:172] (0xc002740320) (3) Data frame handling
I0707 16:53:36.360373       6 log.go:172] (0xc002740320) (3) Data frame sent
I0707 16:53:36.360404       6 log.go:172] (0xc002450a50) Data frame received for 5
I0707 16:53:36.360438       6 log.go:172] (0xc000a82820) (5) Data frame handling
I0707 16:53:36.360466       6 log.go:172] (0xc002450a50) Data frame received for 3
I0707 16:53:36.360478       6 log.go:172] (0xc002740320) (3) Data frame handling
I0707 16:53:36.362402       6 log.go:172] (0xc002450a50) Data frame received for 1
I0707 16:53:36.362426       6 log.go:172] (0xc001e0caa0) (1) Data frame handling
I0707 16:53:36.362453       6 log.go:172] (0xc001e0caa0) (1) Data frame sent
I0707 16:53:36.362476       6 log.go:172] (0xc002450a50) (0xc001e0caa0) Stream removed, broadcasting: 1
I0707 16:53:36.362536       6 log.go:172] (0xc002450a50) Go away received
I0707 16:53:36.362573       6 log.go:172] (0xc002450a50) (0xc001e0caa0) Stream removed, broadcasting: 1
I0707 16:53:36.362615       6 log.go:172] (0xc002450a50) (0xc002740320) Stream removed, broadcasting: 3
I0707 16:53:36.362631       6 log.go:172] (0xc002450a50) (0xc000a82820) Stream removed, broadcasting: 5
Jul  7 16:53:36.362: INFO: Found all expected endpoints: [netserver-0]
Jul  7 16:53:36.366: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.107:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6076 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 16:53:36.366: INFO: >>> kubeConfig: /root/.kube/config
I0707 16:53:36.393873       6 log.go:172] (0xc001dee580) (0xc001e0cfa0) Create stream
I0707 16:53:36.393903       6 log.go:172] (0xc001dee580) (0xc001e0cfa0) Stream added, broadcasting: 1
I0707 16:53:36.400071       6 log.go:172] (0xc001dee580) Reply frame received for 1
I0707 16:53:36.400129       6 log.go:172] (0xc001dee580) (0xc001e0d360) Create stream
I0707 16:53:36.400153       6 log.go:172] (0xc001dee580) (0xc001e0d360) Stream added, broadcasting: 3
I0707 16:53:36.401782       6 log.go:172] (0xc001dee580) Reply frame received for 3
I0707 16:53:36.401823       6 log.go:172] (0xc001dee580) (0xc0027403c0) Create stream
I0707 16:53:36.401843       6 log.go:172] (0xc001dee580) (0xc0027403c0) Stream added, broadcasting: 5
I0707 16:53:36.402702       6 log.go:172] (0xc001dee580) Reply frame received for 5
I0707 16:53:36.465551       6 log.go:172] (0xc001dee580) Data frame received for 3
I0707 16:53:36.465579       6 log.go:172] (0xc001e0d360) (3) Data frame handling
I0707 16:53:36.465589       6 log.go:172] (0xc001e0d360) (3) Data frame sent
I0707 16:53:36.465596       6 log.go:172] (0xc001dee580) Data frame received for 3
I0707 16:53:36.465602       6 log.go:172] (0xc001e0d360) (3) Data frame handling
I0707 16:53:36.465664       6 log.go:172] (0xc001dee580) Data frame received for 5
I0707 16:53:36.465675       6 log.go:172] (0xc0027403c0) (5) Data frame handling
I0707 16:53:36.467544       6 log.go:172] (0xc001dee580) Data frame received for 1
I0707 16:53:36.467568       6 log.go:172] (0xc001e0cfa0) (1) Data frame handling
I0707 16:53:36.467575       6 log.go:172] (0xc001e0cfa0) (1) Data frame sent
I0707 16:53:36.467583       6 log.go:172] (0xc001dee580) (0xc001e0cfa0) Stream removed, broadcasting: 1
I0707 16:53:36.467591       6 log.go:172] (0xc001dee580) Go away received
I0707 16:53:36.467794       6 log.go:172] (0xc001dee580) (0xc001e0cfa0) Stream removed, broadcasting: 1
I0707 16:53:36.467830       6 log.go:172] (0xc001dee580) (0xc001e0d360) Stream removed, broadcasting: 3
I0707 16:53:36.467843       6 log.go:172] (0xc001dee580) (0xc0027403c0) Stream removed, broadcasting: 5
Jul  7 16:53:36.467: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:53:36.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6076" for this suite.

• [SLOW TEST:30.986 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4102,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:53:36.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:53:45.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2515" for this suite.

• [SLOW TEST:9.441 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4153,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:53:45.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 16:53:46.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbe76df3-edba-49c7-8458-4035912bfbe1" in namespace "projected-8771" to be "success or failure"
Jul  7 16:53:46.841: INFO: Pod "downwardapi-volume-dbe76df3-edba-49c7-8458-4035912bfbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 377.091596ms
Jul  7 16:53:48.844: INFO: Pod "downwardapi-volume-dbe76df3-edba-49c7-8458-4035912bfbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380378547s
Jul  7 16:53:51.015: INFO: Pod "downwardapi-volume-dbe76df3-edba-49c7-8458-4035912bfbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.551440388s
Jul  7 16:53:53.262: INFO: Pod "downwardapi-volume-dbe76df3-edba-49c7-8458-4035912bfbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.79836052s
Jul  7 16:53:55.501: INFO: Pod "downwardapi-volume-dbe76df3-edba-49c7-8458-4035912bfbe1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.037696869s
STEP: Saw pod success
Jul  7 16:53:55.502: INFO: Pod "downwardapi-volume-dbe76df3-edba-49c7-8458-4035912bfbe1" satisfied condition "success or failure"
Jul  7 16:53:55.504: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dbe76df3-edba-49c7-8458-4035912bfbe1 container client-container: 
STEP: delete the pod
Jul  7 16:53:55.772: INFO: Waiting for pod downwardapi-volume-dbe76df3-edba-49c7-8458-4035912bfbe1 to disappear
Jul  7 16:53:56.239: INFO: Pod downwardapi-volume-dbe76df3-edba-49c7-8458-4035912bfbe1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:53:56.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8771" for this suite.

• [SLOW TEST:10.759 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4154,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:53:56.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul  7 16:54:07.820: INFO: Successfully updated pod "pod-update-7ffbb957-8139-4ec7-bcd2-13ebe958fb7f"
STEP: verifying the updated pod is in kubernetes
Jul  7 16:54:07.886: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:54:07.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1719" for this suite.

• [SLOW TEST:11.217 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4157,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:54:07.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul  7 16:54:08.849: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2376 /api/v1/namespaces/watch-2376/configmaps/e2e-watch-test-label-changed adc7d9ad-76e1-4405-8f13-7ccb2e056ec5 951182 0 2020-07-07 16:54:08 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  7 16:54:08.849: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2376 /api/v1/namespaces/watch-2376/configmaps/e2e-watch-test-label-changed adc7d9ad-76e1-4405-8f13-7ccb2e056ec5 951183 0 2020-07-07 16:54:08 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul  7 16:54:08.849: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2376 /api/v1/namespaces/watch-2376/configmaps/e2e-watch-test-label-changed adc7d9ad-76e1-4405-8f13-7ccb2e056ec5 951184 0 2020-07-07 16:54:08 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul  7 16:54:20.108: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2376 /api/v1/namespaces/watch-2376/configmaps/e2e-watch-test-label-changed adc7d9ad-76e1-4405-8f13-7ccb2e056ec5 951220 0 2020-07-07 16:54:08 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  7 16:54:20.108: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2376 /api/v1/namespaces/watch-2376/configmaps/e2e-watch-test-label-changed adc7d9ad-76e1-4405-8f13-7ccb2e056ec5 951223 0 2020-07-07 16:54:08 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jul  7 16:54:20.108: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2376 /api/v1/namespaces/watch-2376/configmaps/e2e-watch-test-label-changed adc7d9ad-76e1-4405-8f13-7ccb2e056ec5 951224 0 2020-07-07 16:54:08 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:54:20.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2376" for this suite.

• [SLOW TEST:13.536 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":249,"skipped":4158,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:54:21.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9894
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9894
STEP: Creating statefulset with conflicting port in namespace statefulset-9894
STEP: Waiting until pod test-pod starts running in namespace statefulset-9894
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-9894
Jul  7 16:54:38.969: INFO: Observed stateful pod in namespace: statefulset-9894, name: ss-0, uid: 279e5555-e0ff-4dfd-8bdd-ecefb71e7b59, status phase: Pending. Waiting for statefulset controller to delete.
Jul  7 16:54:40.193: INFO: Observed stateful pod in namespace: statefulset-9894, name: ss-0, uid: 279e5555-e0ff-4dfd-8bdd-ecefb71e7b59, status phase: Failed. Waiting for statefulset controller to delete.
Jul  7 16:54:40.273: INFO: Observed stateful pod in namespace: statefulset-9894, name: ss-0, uid: 279e5555-e0ff-4dfd-8bdd-ecefb71e7b59, status phase: Failed. Waiting for statefulset controller to delete.
Jul  7 16:54:40.663: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9894
STEP: Removing pod with conflicting port in namespace statefulset-9894
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-9894 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul  7 16:54:51.496: INFO: Deleting all statefulset in ns statefulset-9894
Jul  7 16:54:51.498: INFO: Scaling statefulset ss to 0
Jul  7 16:55:11.931: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 16:55:12.256: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:55:12.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9894" for this suite.

• [SLOW TEST:51.208 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":250,"skipped":4165,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:55:12.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Jul  7 16:55:14.161: INFO: Waiting up to 5m0s for pod "client-containers-4e7a0f6d-e0d1-4a88-aaf7-ede5a926ba49" in namespace "containers-2722" to be "success or failure"
Jul  7 16:55:14.648: INFO: Pod "client-containers-4e7a0f6d-e0d1-4a88-aaf7-ede5a926ba49": Phase="Pending", Reason="", readiness=false. Elapsed: 486.977618ms
Jul  7 16:55:16.928: INFO: Pod "client-containers-4e7a0f6d-e0d1-4a88-aaf7-ede5a926ba49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.767035186s
Jul  7 16:55:19.221: INFO: Pod "client-containers-4e7a0f6d-e0d1-4a88-aaf7-ede5a926ba49": Phase="Pending", Reason="", readiness=false. Elapsed: 5.060235988s
Jul  7 16:55:21.378: INFO: Pod "client-containers-4e7a0f6d-e0d1-4a88-aaf7-ede5a926ba49": Phase="Running", Reason="", readiness=true. Elapsed: 7.216938733s
Jul  7 16:55:23.390: INFO: Pod "client-containers-4e7a0f6d-e0d1-4a88-aaf7-ede5a926ba49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.229075112s
STEP: Saw pod success
Jul  7 16:55:23.390: INFO: Pod "client-containers-4e7a0f6d-e0d1-4a88-aaf7-ede5a926ba49" satisfied condition "success or failure"
Jul  7 16:55:23.392: INFO: Trying to get logs from node jerma-worker2 pod client-containers-4e7a0f6d-e0d1-4a88-aaf7-ede5a926ba49 container test-container: 
STEP: delete the pod
Jul  7 16:55:23.418: INFO: Waiting for pod client-containers-4e7a0f6d-e0d1-4a88-aaf7-ede5a926ba49 to disappear
Jul  7 16:55:23.463: INFO: Pod client-containers-4e7a0f6d-e0d1-4a88-aaf7-ede5a926ba49 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:55:23.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2722" for this suite.

• [SLOW TEST:10.832 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4190,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:55:23.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-618733fd-37af-4548-ac7f-e6d40b45f2fc
STEP: Creating a pod to test consume secrets
Jul  7 16:55:23.815: INFO: Waiting up to 5m0s for pod "pod-secrets-f80a1f15-853d-4d1f-af7d-4d2f8c2f349a" in namespace "secrets-5282" to be "success or failure"
Jul  7 16:55:23.910: INFO: Pod "pod-secrets-f80a1f15-853d-4d1f-af7d-4d2f8c2f349a": Phase="Pending", Reason="", readiness=false. Elapsed: 94.525855ms
Jul  7 16:55:25.914: INFO: Pod "pod-secrets-f80a1f15-853d-4d1f-af7d-4d2f8c2f349a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098751199s
Jul  7 16:55:27.918: INFO: Pod "pod-secrets-f80a1f15-853d-4d1f-af7d-4d2f8c2f349a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102863066s
Jul  7 16:55:29.931: INFO: Pod "pod-secrets-f80a1f15-853d-4d1f-af7d-4d2f8c2f349a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115266851s
STEP: Saw pod success
Jul  7 16:55:29.931: INFO: Pod "pod-secrets-f80a1f15-853d-4d1f-af7d-4d2f8c2f349a" satisfied condition "success or failure"
Jul  7 16:55:29.932: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-f80a1f15-853d-4d1f-af7d-4d2f8c2f349a container secret-volume-test: 
STEP: delete the pod
Jul  7 16:55:30.008: INFO: Waiting for pod pod-secrets-f80a1f15-853d-4d1f-af7d-4d2f8c2f349a to disappear
Jul  7 16:55:30.020: INFO: Pod pod-secrets-f80a1f15-853d-4d1f-af7d-4d2f8c2f349a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:55:30.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5282" for this suite.

• [SLOW TEST:6.556 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4199,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:55:30.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 16:55:30.725: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 16:55:32.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737730, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737730, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737730, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737730, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 16:55:35.947: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:55:37.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9577" for this suite.
STEP: Destroying namespace "webhook-9577-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.625 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":253,"skipped":4205,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:55:37.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jul  7 16:55:38.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3539'
Jul  7 16:55:39.145: INFO: stderr: ""
Jul  7 16:55:39.145: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  7 16:55:39.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3539'
Jul  7 16:55:39.542: INFO: stderr: ""
Jul  7 16:55:39.542: INFO: stdout: "update-demo-nautilus-fvm7p "
STEP: Replicas for name=update-demo: expected=2 actual=1
Jul  7 16:55:44.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3539'
Jul  7 16:55:44.644: INFO: stderr: ""
Jul  7 16:55:44.644: INFO: stdout: "update-demo-nautilus-fvm7p update-demo-nautilus-tzn2n "
Jul  7 16:55:44.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvm7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:55:44.771: INFO: stderr: ""
Jul  7 16:55:44.772: INFO: stdout: ""
Jul  7 16:55:44.772: INFO: update-demo-nautilus-fvm7p is created but not running
Jul  7 16:55:49.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3539'
Jul  7 16:55:49.931: INFO: stderr: ""
Jul  7 16:55:49.931: INFO: stdout: "update-demo-nautilus-fvm7p update-demo-nautilus-tzn2n "
Jul  7 16:55:49.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvm7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:55:50.079: INFO: stderr: ""
Jul  7 16:55:50.079: INFO: stdout: "true"
Jul  7 16:55:50.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvm7p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:55:51.123: INFO: stderr: ""
Jul  7 16:55:51.123: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 16:55:51.123: INFO: validating pod update-demo-nautilus-fvm7p
Jul  7 16:55:51.319: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 16:55:51.319: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  7 16:55:51.319: INFO: update-demo-nautilus-fvm7p is verified up and running
Jul  7 16:55:51.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tzn2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:55:51.636: INFO: stderr: ""
Jul  7 16:55:51.636: INFO: stdout: "true"
Jul  7 16:55:51.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tzn2n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:55:51.862: INFO: stderr: ""
Jul  7 16:55:51.862: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 16:55:51.862: INFO: validating pod update-demo-nautilus-tzn2n
Jul  7 16:55:51.942: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 16:55:51.942: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  7 16:55:51.942: INFO: update-demo-nautilus-tzn2n is verified up and running
STEP: scaling down the replication controller
Jul  7 16:55:51.946: INFO: scanned /root for discovery docs: 
Jul  7 16:55:51.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3539'
Jul  7 16:55:53.304: INFO: stderr: ""
Jul  7 16:55:53.304: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  7 16:55:53.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3539'
Jul  7 16:55:53.437: INFO: stderr: ""
Jul  7 16:55:53.437: INFO: stdout: "update-demo-nautilus-fvm7p update-demo-nautilus-tzn2n "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul  7 16:55:58.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3539'
Jul  7 16:55:58.537: INFO: stderr: ""
Jul  7 16:55:58.537: INFO: stdout: "update-demo-nautilus-fvm7p update-demo-nautilus-tzn2n "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul  7 16:56:03.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3539'
Jul  7 16:56:03.642: INFO: stderr: ""
Jul  7 16:56:03.642: INFO: stdout: "update-demo-nautilus-fvm7p update-demo-nautilus-tzn2n "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul  7 16:56:08.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3539'
Jul  7 16:56:08.739: INFO: stderr: ""
Jul  7 16:56:08.739: INFO: stdout: "update-demo-nautilus-fvm7p "
Jul  7 16:56:08.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvm7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:56:08.825: INFO: stderr: ""
Jul  7 16:56:08.825: INFO: stdout: "true"
Jul  7 16:56:08.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvm7p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:56:08.913: INFO: stderr: ""
Jul  7 16:56:08.913: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 16:56:08.913: INFO: validating pod update-demo-nautilus-fvm7p
Jul  7 16:56:08.916: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 16:56:08.916: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  7 16:56:08.916: INFO: update-demo-nautilus-fvm7p is verified up and running
STEP: scaling up the replication controller
Jul  7 16:56:08.918: INFO: scanned /root for discovery docs: 
Jul  7 16:56:08.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3539'
Jul  7 16:56:10.100: INFO: stderr: ""
Jul  7 16:56:10.100: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  7 16:56:10.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3539'
Jul  7 16:56:10.784: INFO: stderr: ""
Jul  7 16:56:10.784: INFO: stdout: "update-demo-nautilus-fvm7p update-demo-nautilus-w8dqj "
Jul  7 16:56:10.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvm7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:56:10.903: INFO: stderr: ""
Jul  7 16:56:10.903: INFO: stdout: "true"
Jul  7 16:56:10.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvm7p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:56:10.990: INFO: stderr: ""
Jul  7 16:56:10.990: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 16:56:10.990: INFO: validating pod update-demo-nautilus-fvm7p
Jul  7 16:56:10.993: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 16:56:10.993: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  7 16:56:10.993: INFO: update-demo-nautilus-fvm7p is verified up and running
Jul  7 16:56:10.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8dqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:56:11.113: INFO: stderr: ""
Jul  7 16:56:11.114: INFO: stdout: ""
Jul  7 16:56:11.114: INFO: update-demo-nautilus-w8dqj is created but not running
Jul  7 16:56:16.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3539'
Jul  7 16:56:16.212: INFO: stderr: ""
Jul  7 16:56:16.212: INFO: stdout: "update-demo-nautilus-fvm7p update-demo-nautilus-w8dqj "
Jul  7 16:56:16.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvm7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:56:16.336: INFO: stderr: ""
Jul  7 16:56:16.336: INFO: stdout: "true"
Jul  7 16:56:16.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvm7p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:56:16.429: INFO: stderr: ""
Jul  7 16:56:16.429: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 16:56:16.429: INFO: validating pod update-demo-nautilus-fvm7p
Jul  7 16:56:16.433: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 16:56:16.433: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  7 16:56:16.433: INFO: update-demo-nautilus-fvm7p is verified up and running
Jul  7 16:56:16.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8dqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:56:16.518: INFO: stderr: ""
Jul  7 16:56:16.518: INFO: stdout: "true"
Jul  7 16:56:16.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8dqj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3539'
Jul  7 16:56:16.614: INFO: stderr: ""
Jul  7 16:56:16.614: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 16:56:16.614: INFO: validating pod update-demo-nautilus-w8dqj
Jul  7 16:56:16.619: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 16:56:16.619: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  7 16:56:16.619: INFO: update-demo-nautilus-w8dqj is verified up and running
STEP: using delete to clean up resources
Jul  7 16:56:16.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3539'
Jul  7 16:56:16.740: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 16:56:16.740: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  7 16:56:16.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3539'
Jul  7 16:56:16.854: INFO: stderr: "No resources found in kubectl-3539 namespace.\n"
Jul  7 16:56:16.854: INFO: stdout: ""
Jul  7 16:56:16.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3539 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  7 16:56:16.951: INFO: stderr: ""
Jul  7 16:56:16.951: INFO: stdout: "update-demo-nautilus-fvm7p\nupdate-demo-nautilus-w8dqj\n"
Jul  7 16:56:17.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3539'
Jul  7 16:56:17.556: INFO: stderr: "No resources found in kubectl-3539 namespace.\n"
Jul  7 16:56:17.556: INFO: stdout: ""
Jul  7 16:56:17.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3539 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  7 16:56:17.645: INFO: stderr: ""
Jul  7 16:56:17.645: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:56:17.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3539" for this suite.

• [SLOW TEST:39.997 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":254,"skipped":4210,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:56:17.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 16:56:19.312: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 16:56:21.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737779, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737779, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737779, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737779, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 16:56:23.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737779, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737779, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737779, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729737779, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 16:56:26.996: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jul  7 16:56:27.076: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 16:56:27.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2622" for this suite.
STEP: Destroying namespace "webhook-2622-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.736 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":255,"skipped":4227,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 16:56:27.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul  7 16:56:27.447: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  7 16:56:27.468: INFO: Waiting for terminating namespaces to be deleted...
Jul  7 16:56:27.470: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Jul  7 16:56:27.486: INFO: kindnet-gnxwn from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  7 16:56:27.486: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:56:27.486: INFO: kube-proxy-8sp85 from kube-system started at 2020-07-04 07:51:00 +0000 UTC (1 container statuses recorded)
Jul  7 16:56:27.486: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 16:56:27.486: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Jul  7 16:56:27.561: INFO: kindnet-qg8qr from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  7 16:56:27.561: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 16:56:27.561: INFO: kube-proxy-b2ncl from kube-system started at 2020-07-04 07:51:01 +0000 UTC (1 container statuses recorded)
Jul  7 16:56:27.561: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1daaf7e0-4278-4088-9912-362a565ee03c 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here), expecting it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides, expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-1daaf7e0-4278-4088-9912-362a565ee03c off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1daaf7e0-4278-4088-9912-362a565ee03c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:01:38.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9457" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:311.085 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":256,"skipped":4250,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:01:38.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jul  7 17:01:38.535: INFO: PodSpec: initContainers in spec.initContainers
Jul  7 17:02:34.595: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b52785e8-b2e8-473d-82c0-b10b61e95264", GenerateName:"", Namespace:"init-container-4029", SelfLink:"/api/v1/namespaces/init-container-4029/pods/pod-init-b52785e8-b2e8-473d-82c0-b10b61e95264", UID:"e3401a48-5cac-4e61-94c2-380f5320f431", ResourceVersion:"953046", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729738098, loc:(*time.Location)(0x78f7140)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"535511626"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vd25t", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004a73600), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vd25t", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vd25t", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vd25t", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003e2a438), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002901560), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003e2a4c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003e2a4e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003e2a4e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003e2a4ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738099, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738099, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738099, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738098, loc:(*time.Location)(0x78f7140)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.119", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.119"}}, StartTime:(*v1.Time)(0xc003ca7140), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026403f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002640460)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3100825c21444de09510e76bc3ab8970ea64edd210047895fdd3225b851b2c71", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003ca7180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003ca7160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003e2a56f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:02:34.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4029" for this suite.

• [SLOW TEST:57.466 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":257,"skipped":4266,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:02:35.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 17:02:36.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jul  7 17:02:38.047: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-07T17:02:37Z generation:1 name:name1 resourceVersion:953068 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ad0824a5-975d-4d60-95de-9d15ed3b5dda] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jul  7 17:02:48.440: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-07T17:02:48Z generation:1 name:name2 resourceVersion:953100 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:74243080-46ca-42ef-95ba-bf14be51784c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jul  7 17:02:58.504: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-07T17:02:37Z generation:2 name:name1 resourceVersion:953125 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ad0824a5-975d-4d60-95de-9d15ed3b5dda] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jul  7 17:03:08.617: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-07T17:02:48Z generation:2 name:name2 resourceVersion:953153 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:74243080-46ca-42ef-95ba-bf14be51784c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jul  7 17:03:18.963: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-07T17:02:37Z generation:2 name:name1 resourceVersion:953178 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ad0824a5-975d-4d60-95de-9d15ed3b5dda] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jul  7 17:03:29.146: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-07T17:02:48Z generation:2 name:name2 resourceVersion:953207 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:74243080-46ca-42ef-95ba-bf14be51784c] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:03:39.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-1001" for this suite.

• [SLOW TEST:63.809 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":258,"skipped":4269,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:03:39.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-3095
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  7 17:03:39.998: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  7 17:04:06.205: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.114:8080/dial?request=hostname&protocol=http&host=10.244.1.120&port=8080&tries=1'] Namespace:pod-network-test-3095 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 17:04:06.206: INFO: >>> kubeConfig: /root/.kube/config
I0707 17:04:06.239014       6 log.go:172] (0xc002866420) (0xc001a36d20) Create stream
I0707 17:04:06.239050       6 log.go:172] (0xc002866420) (0xc001a36d20) Stream added, broadcasting: 1
I0707 17:04:06.242806       6 log.go:172] (0xc002866420) Reply frame received for 1
I0707 17:04:06.242877       6 log.go:172] (0xc002866420) (0xc002740000) Create stream
I0707 17:04:06.242897       6 log.go:172] (0xc002866420) (0xc002740000) Stream added, broadcasting: 3
I0707 17:04:06.243673       6 log.go:172] (0xc002866420) Reply frame received for 3
I0707 17:04:06.243713       6 log.go:172] (0xc002866420) (0xc001222aa0) Create stream
I0707 17:04:06.243741       6 log.go:172] (0xc002866420) (0xc001222aa0) Stream added, broadcasting: 5
I0707 17:04:06.244454       6 log.go:172] (0xc002866420) Reply frame received for 5
I0707 17:04:06.305884       6 log.go:172] (0xc002866420) Data frame received for 3
I0707 17:04:06.305937       6 log.go:172] (0xc002740000) (3) Data frame handling
I0707 17:04:06.305979       6 log.go:172] (0xc002740000) (3) Data frame sent
I0707 17:04:06.306223       6 log.go:172] (0xc002866420) Data frame received for 5
I0707 17:04:06.306307       6 log.go:172] (0xc002866420) Data frame received for 3
I0707 17:04:06.306353       6 log.go:172] (0xc002740000) (3) Data frame handling
I0707 17:04:06.306388       6 log.go:172] (0xc001222aa0) (5) Data frame handling
I0707 17:04:06.308407       6 log.go:172] (0xc002866420) Data frame received for 1
I0707 17:04:06.308420       6 log.go:172] (0xc001a36d20) (1) Data frame handling
I0707 17:04:06.308437       6 log.go:172] (0xc001a36d20) (1) Data frame sent
I0707 17:04:06.308448       6 log.go:172] (0xc002866420) (0xc001a36d20) Stream removed, broadcasting: 1
I0707 17:04:06.308557       6 log.go:172] (0xc002866420) Go away received
I0707 17:04:06.308676       6 log.go:172] (0xc002866420) (0xc001a36d20) Stream removed, broadcasting: 1
I0707 17:04:06.308735       6 log.go:172] (0xc002866420) (0xc002740000) Stream removed, broadcasting: 3
I0707 17:04:06.308754       6 log.go:172] (0xc002866420) (0xc001222aa0) Stream removed, broadcasting: 5
Jul  7 17:04:06.308: INFO: Waiting for responses: map[]
Jul  7 17:04:06.312: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.114:8080/dial?request=hostname&protocol=http&host=10.244.2.113&port=8080&tries=1'] Namespace:pod-network-test-3095 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 17:04:06.312: INFO: >>> kubeConfig: /root/.kube/config
I0707 17:04:06.341865       6 log.go:172] (0xc0023984d0) (0xc00160e1e0) Create stream
I0707 17:04:06.341904       6 log.go:172] (0xc0023984d0) (0xc00160e1e0) Stream added, broadcasting: 1
I0707 17:04:06.344207       6 log.go:172] (0xc0023984d0) Reply frame received for 1
I0707 17:04:06.344252       6 log.go:172] (0xc0023984d0) (0xc001222fa0) Create stream
I0707 17:04:06.344268       6 log.go:172] (0xc0023984d0) (0xc001222fa0) Stream added, broadcasting: 3
I0707 17:04:06.345617       6 log.go:172] (0xc0023984d0) Reply frame received for 3
I0707 17:04:06.345655       6 log.go:172] (0xc0023984d0) (0xc002740140) Create stream
I0707 17:04:06.345671       6 log.go:172] (0xc0023984d0) (0xc002740140) Stream added, broadcasting: 5
I0707 17:04:06.346653       6 log.go:172] (0xc0023984d0) Reply frame received for 5
I0707 17:04:06.416374       6 log.go:172] (0xc0023984d0) Data frame received for 3
I0707 17:04:06.416420       6 log.go:172] (0xc001222fa0) (3) Data frame handling
I0707 17:04:06.416449       6 log.go:172] (0xc001222fa0) (3) Data frame sent
I0707 17:04:06.417352       6 log.go:172] (0xc0023984d0) Data frame received for 3
I0707 17:04:06.417394       6 log.go:172] (0xc001222fa0) (3) Data frame handling
I0707 17:04:06.417422       6 log.go:172] (0xc0023984d0) Data frame received for 5
I0707 17:04:06.417438       6 log.go:172] (0xc002740140) (5) Data frame handling
I0707 17:04:06.418839       6 log.go:172] (0xc0023984d0) Data frame received for 1
I0707 17:04:06.418865       6 log.go:172] (0xc00160e1e0) (1) Data frame handling
I0707 17:04:06.418923       6 log.go:172] (0xc00160e1e0) (1) Data frame sent
I0707 17:04:06.418951       6 log.go:172] (0xc0023984d0) (0xc00160e1e0) Stream removed, broadcasting: 1
I0707 17:04:06.418974       6 log.go:172] (0xc0023984d0) Go away received
I0707 17:04:06.419101       6 log.go:172] (0xc0023984d0) (0xc00160e1e0) Stream removed, broadcasting: 1
I0707 17:04:06.419119       6 log.go:172] (0xc0023984d0) (0xc001222fa0) Stream removed, broadcasting: 3
I0707 17:04:06.419127       6 log.go:172] (0xc0023984d0) (0xc002740140) Stream removed, broadcasting: 5
Jul  7 17:04:06.419: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:04:06.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3095" for this suite.

• [SLOW TEST:26.676 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4286,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:04:06.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 17:04:07.433: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 17:04:09.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738247, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738247, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738247, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738247, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 17:04:11.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738247, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738247, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738247, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738247, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 17:04:15.128: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:04:15.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7740" for this suite.
STEP: Destroying namespace "webhook-7740-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.960 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":260,"skipped":4294,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:04:18.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 17:04:19.496: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86972a3d-f53b-4817-b088-d923f65a9434" in namespace "downward-api-380" to be "success or failure"
Jul  7 17:04:20.020: INFO: Pod "downwardapi-volume-86972a3d-f53b-4817-b088-d923f65a9434": Phase="Pending", Reason="", readiness=false. Elapsed: 524.127762ms
Jul  7 17:04:22.182: INFO: Pod "downwardapi-volume-86972a3d-f53b-4817-b088-d923f65a9434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.685873976s
Jul  7 17:04:24.523: INFO: Pod "downwardapi-volume-86972a3d-f53b-4817-b088-d923f65a9434": Phase="Pending", Reason="", readiness=false. Elapsed: 5.027052799s
Jul  7 17:04:27.202: INFO: Pod "downwardapi-volume-86972a3d-f53b-4817-b088-d923f65a9434": Phase="Running", Reason="", readiness=true. Elapsed: 7.706041294s
Jul  7 17:04:29.206: INFO: Pod "downwardapi-volume-86972a3d-f53b-4817-b088-d923f65a9434": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.710305957s
STEP: Saw pod success
Jul  7 17:04:29.206: INFO: Pod "downwardapi-volume-86972a3d-f53b-4817-b088-d923f65a9434" satisfied condition "success or failure"
Jul  7 17:04:29.209: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-86972a3d-f53b-4817-b088-d923f65a9434 container client-container: 
STEP: delete the pod
Jul  7 17:04:29.414: INFO: Waiting for pod downwardapi-volume-86972a3d-f53b-4817-b088-d923f65a9434 to disappear
Jul  7 17:04:29.448: INFO: Pod downwardapi-volume-86972a3d-f53b-4817-b088-d923f65a9434 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:04:29.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-380" for this suite.

• [SLOW TEST:11.066 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4347,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:04:29.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-c690569d-722b-4c4d-9c2f-549ebc0c12c2
STEP: Creating a pod to test consume configMaps
Jul  7 17:04:29.795: INFO: Waiting up to 5m0s for pod "pod-configmaps-c2126b5f-ab81-4d6a-af77-e829c2f36d0f" in namespace "configmap-9377" to be "success or failure"
Jul  7 17:04:29.977: INFO: Pod "pod-configmaps-c2126b5f-ab81-4d6a-af77-e829c2f36d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 182.064085ms
Jul  7 17:04:33.134: INFO: Pod "pod-configmaps-c2126b5f-ab81-4d6a-af77-e829c2f36d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.339342563s
Jul  7 17:04:35.595: INFO: Pod "pod-configmaps-c2126b5f-ab81-4d6a-af77-e829c2f36d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.800304056s
Jul  7 17:04:37.599: INFO: Pod "pod-configmaps-c2126b5f-ab81-4d6a-af77-e829c2f36d0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.804293987s
STEP: Saw pod success
Jul  7 17:04:37.599: INFO: Pod "pod-configmaps-c2126b5f-ab81-4d6a-af77-e829c2f36d0f" satisfied condition "success or failure"
Jul  7 17:04:37.602: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-c2126b5f-ab81-4d6a-af77-e829c2f36d0f container configmap-volume-test: 
STEP: delete the pod
Jul  7 17:04:37.668: INFO: Waiting for pod pod-configmaps-c2126b5f-ab81-4d6a-af77-e829c2f36d0f to disappear
Jul  7 17:04:37.716: INFO: Pod pod-configmaps-c2126b5f-ab81-4d6a-af77-e829c2f36d0f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:04:37.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9377" for this suite.

• [SLOW TEST:8.265 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4412,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:04:37.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:04:57.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6370" for this suite.

• [SLOW TEST:19.667 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":263,"skipped":4452,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:04:57.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 17:04:58.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-224
I0707 17:04:58.176406       6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-224, replica count: 1
I0707 17:04:59.226826       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 17:05:00.227034       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 17:05:01.227233       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 17:05:02.227449       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 17:05:03.227675       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 17:05:04.227915       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 17:05:05.228113       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  7 17:05:05.558: INFO: Created: latency-svc-dxdv4
Jul  7 17:05:05.640: INFO: Got endpoints: latency-svc-dxdv4 [312.474302ms]
Jul  7 17:05:05.777: INFO: Created: latency-svc-9plvk
Jul  7 17:05:06.176: INFO: Got endpoints: latency-svc-9plvk [535.231954ms]
Jul  7 17:05:06.222: INFO: Created: latency-svc-v24tz
Jul  7 17:05:06.252: INFO: Got endpoints: latency-svc-v24tz [611.944226ms]
Jul  7 17:05:06.368: INFO: Created: latency-svc-bwlq5
Jul  7 17:05:06.499: INFO: Got endpoints: latency-svc-bwlq5 [858.042578ms]
Jul  7 17:05:06.942: INFO: Created: latency-svc-hfkmh
Jul  7 17:05:07.260: INFO: Got endpoints: latency-svc-hfkmh [1.619065604s]
Jul  7 17:05:07.472: INFO: Created: latency-svc-rbzwd
Jul  7 17:05:07.607: INFO: Got endpoints: latency-svc-rbzwd [1.966399546s]
Jul  7 17:05:08.111: INFO: Created: latency-svc-qmbpg
Jul  7 17:05:08.314: INFO: Got endpoints: latency-svc-qmbpg [2.673025569s]
Jul  7 17:05:08.376: INFO: Created: latency-svc-pzf5d
Jul  7 17:05:08.481: INFO: Got endpoints: latency-svc-pzf5d [2.840264468s]
Jul  7 17:05:08.562: INFO: Created: latency-svc-zk56v
Jul  7 17:05:08.600: INFO: Got endpoints: latency-svc-zk56v [2.959179842s]
Jul  7 17:05:08.636: INFO: Created: latency-svc-bsqm7
Jul  7 17:05:08.649: INFO: Got endpoints: latency-svc-bsqm7 [3.008667041s]
Jul  7 17:05:08.670: INFO: Created: latency-svc-p664f
Jul  7 17:05:08.686: INFO: Got endpoints: latency-svc-p664f [3.045284566s]
Jul  7 17:05:08.769: INFO: Created: latency-svc-d9b6v
Jul  7 17:05:08.797: INFO: Created: latency-svc-b6zww
Jul  7 17:05:08.797: INFO: Got endpoints: latency-svc-d9b6v [3.156495303s]
Jul  7 17:05:08.857: INFO: Got endpoints: latency-svc-b6zww [3.216038266s]
Jul  7 17:05:08.936: INFO: Created: latency-svc-7hnvc
Jul  7 17:05:08.951: INFO: Got endpoints: latency-svc-7hnvc [3.310444639s]
Jul  7 17:05:08.989: INFO: Created: latency-svc-njdzn
Jul  7 17:05:09.024: INFO: Got endpoints: latency-svc-njdzn [3.38270222s]
Jul  7 17:05:09.086: INFO: Created: latency-svc-n7rpg
Jul  7 17:05:09.108: INFO: Got endpoints: latency-svc-n7rpg [3.467304351s]
Jul  7 17:05:09.151: INFO: Created: latency-svc-pl969
Jul  7 17:05:09.169: INFO: Got endpoints: latency-svc-pl969 [2.993425663s]
Jul  7 17:05:09.223: INFO: Created: latency-svc-96rrj
Jul  7 17:05:09.241: INFO: Got endpoints: latency-svc-96rrj [2.988616996s]
Jul  7 17:05:09.295: INFO: Created: latency-svc-d77kc
Jul  7 17:05:09.313: INFO: Got endpoints: latency-svc-d77kc [2.81452191s]
Jul  7 17:05:09.385: INFO: Created: latency-svc-7gnsw
Jul  7 17:05:09.398: INFO: Got endpoints: latency-svc-7gnsw [2.138416499s]
Jul  7 17:05:09.446: INFO: Created: latency-svc-wgf7b
Jul  7 17:05:09.464: INFO: Got endpoints: latency-svc-wgf7b [1.856336167s]
Jul  7 17:05:09.523: INFO: Created: latency-svc-rxhkh
Jul  7 17:05:09.536: INFO: Got endpoints: latency-svc-rxhkh [1.221979774s]
Jul  7 17:05:09.583: INFO: Created: latency-svc-hmbp8
Jul  7 17:05:09.590: INFO: Got endpoints: latency-svc-hmbp8 [1.10941464s]
Jul  7 17:05:09.621: INFO: Created: latency-svc-zd24b
Jul  7 17:05:09.691: INFO: Got endpoints: latency-svc-zd24b [1.090435774s]
Jul  7 17:05:09.722: INFO: Created: latency-svc-jbj8c
Jul  7 17:05:09.735: INFO: Got endpoints: latency-svc-jbj8c [1.08577838s]
Jul  7 17:05:09.789: INFO: Created: latency-svc-lh2vd
Jul  7 17:05:09.828: INFO: Got endpoints: latency-svc-lh2vd [1.142248273s]
Jul  7 17:05:09.853: INFO: Created: latency-svc-gdg95
Jul  7 17:05:09.883: INFO: Got endpoints: latency-svc-gdg95 [1.085504806s]
Jul  7 17:05:09.925: INFO: Created: latency-svc-r5zgk
Jul  7 17:05:09.990: INFO: Got endpoints: latency-svc-r5zgk [1.132475522s]
Jul  7 17:05:10.009: INFO: Created: latency-svc-6g2kj
Jul  7 17:05:10.036: INFO: Got endpoints: latency-svc-6g2kj [1.085507031s]
Jul  7 17:05:10.074: INFO: Created: latency-svc-cqtg7
Jul  7 17:05:10.122: INFO: Got endpoints: latency-svc-cqtg7 [1.097851005s]
Jul  7 17:05:10.146: INFO: Created: latency-svc-5rmjc
Jul  7 17:05:10.165: INFO: Got endpoints: latency-svc-5rmjc [1.057171249s]
Jul  7 17:05:10.188: INFO: Created: latency-svc-lk2qg
Jul  7 17:05:10.199: INFO: Got endpoints: latency-svc-lk2qg [1.029920366s]
Jul  7 17:05:10.285: INFO: Created: latency-svc-ttpzk
Jul  7 17:05:10.320: INFO: Got endpoints: latency-svc-ttpzk [1.078830222s]
Jul  7 17:05:10.404: INFO: Created: latency-svc-c9f7g
Jul  7 17:05:10.422: INFO: Got endpoints: latency-svc-c9f7g [1.108890804s]
Jul  7 17:05:10.494: INFO: Created: latency-svc-rp45t
Jul  7 17:05:10.546: INFO: Got endpoints: latency-svc-rp45t [1.148001954s]
Jul  7 17:05:10.566: INFO: Created: latency-svc-pkm4t
Jul  7 17:05:10.585: INFO: Got endpoints: latency-svc-pkm4t [1.120874999s]
Jul  7 17:05:10.626: INFO: Created: latency-svc-vg79c
Jul  7 17:05:10.720: INFO: Got endpoints: latency-svc-vg79c [1.184690645s]
Jul  7 17:05:10.722: INFO: Created: latency-svc-r7mrm
Jul  7 17:05:10.735: INFO: Got endpoints: latency-svc-r7mrm [1.144714743s]
Jul  7 17:05:11.166: INFO: Created: latency-svc-568js
Jul  7 17:05:11.197: INFO: Got endpoints: latency-svc-568js [1.506261828s]
Jul  7 17:05:11.256: INFO: Created: latency-svc-lc2k2
Jul  7 17:05:11.259: INFO: Got endpoints: latency-svc-lc2k2 [1.523997757s]
Jul  7 17:05:11.382: INFO: Created: latency-svc-tbjpw
Jul  7 17:05:11.408: INFO: Got endpoints: latency-svc-tbjpw [1.579810512s]
Jul  7 17:05:11.671: INFO: Created: latency-svc-t9dsf
Jul  7 17:05:11.834: INFO: Got endpoints: latency-svc-t9dsf [1.951225168s]
Jul  7 17:05:11.844: INFO: Created: latency-svc-862qr
Jul  7 17:05:11.888: INFO: Got endpoints: latency-svc-862qr [1.898130025s]
Jul  7 17:05:12.018: INFO: Created: latency-svc-68z8g
Jul  7 17:05:12.043: INFO: Got endpoints: latency-svc-68z8g [2.006841585s]
Jul  7 17:05:12.133: INFO: Created: latency-svc-tsp74
Jul  7 17:05:12.170: INFO: Got endpoints: latency-svc-tsp74 [2.048095887s]
Jul  7 17:05:12.199: INFO: Created: latency-svc-z7999
Jul  7 17:05:12.212: INFO: Got endpoints: latency-svc-z7999 [2.046300872s]
Jul  7 17:05:12.283: INFO: Created: latency-svc-n2fsw
Jul  7 17:05:12.297: INFO: Got endpoints: latency-svc-n2fsw [2.097878626s]
Jul  7 17:05:12.330: INFO: Created: latency-svc-m8djf
Jul  7 17:05:12.345: INFO: Got endpoints: latency-svc-m8djf [2.025137937s]
Jul  7 17:05:12.372: INFO: Created: latency-svc-6c4ll
Jul  7 17:05:12.439: INFO: Got endpoints: latency-svc-6c4ll [2.016393562s]
Jul  7 17:05:12.468: INFO: Created: latency-svc-7z2wp
Jul  7 17:05:12.522: INFO: Got endpoints: latency-svc-7z2wp [1.975175804s]
Jul  7 17:05:12.661: INFO: Created: latency-svc-dbj5m
Jul  7 17:05:12.688: INFO: Got endpoints: latency-svc-dbj5m [2.103067659s]
Jul  7 17:05:12.744: INFO: Created: latency-svc-mk68b
Jul  7 17:05:12.988: INFO: Got endpoints: latency-svc-mk68b [466.408513ms]
Jul  7 17:05:13.171: INFO: Created: latency-svc-7rrp9
Jul  7 17:05:13.175: INFO: Got endpoints: latency-svc-7rrp9 [2.454842985s]
Jul  7 17:05:13.268: INFO: Created: latency-svc-jd5k2
Jul  7 17:05:13.350: INFO: Got endpoints: latency-svc-jd5k2 [2.614569886s]
Jul  7 17:05:13.546: INFO: Created: latency-svc-rqpl8
Jul  7 17:05:13.550: INFO: Got endpoints: latency-svc-rqpl8 [2.352619186s]
Jul  7 17:05:13.628: INFO: Created: latency-svc-glm8d
Jul  7 17:05:13.643: INFO: Got endpoints: latency-svc-glm8d [2.383768243s]
Jul  7 17:05:13.716: INFO: Created: latency-svc-ms5rf
Jul  7 17:05:13.745: INFO: Got endpoints: latency-svc-ms5rf [2.33692357s]
Jul  7 17:05:13.865: INFO: Created: latency-svc-cq22s
Jul  7 17:05:13.871: INFO: Got endpoints: latency-svc-cq22s [2.036362123s]
Jul  7 17:05:13.945: INFO: Created: latency-svc-c7v45
Jul  7 17:05:14.044: INFO: Got endpoints: latency-svc-c7v45 [2.15604931s]
Jul  7 17:05:14.089: INFO: Created: latency-svc-ckzr7
Jul  7 17:05:14.103: INFO: Got endpoints: latency-svc-ckzr7 [2.059397109s]
Jul  7 17:05:14.131: INFO: Created: latency-svc-kn2nh
Jul  7 17:05:14.181: INFO: Got endpoints: latency-svc-kn2nh [2.011213502s]
Jul  7 17:05:14.197: INFO: Created: latency-svc-dlv2q
Jul  7 17:05:14.214: INFO: Got endpoints: latency-svc-dlv2q [2.002272503s]
Jul  7 17:05:14.244: INFO: Created: latency-svc-x6rwq
Jul  7 17:05:14.361: INFO: Got endpoints: latency-svc-x6rwq [2.064284087s]
Jul  7 17:05:14.401: INFO: Created: latency-svc-j85bl
Jul  7 17:05:14.431: INFO: Got endpoints: latency-svc-j85bl [2.08588082s]
Jul  7 17:05:14.517: INFO: Created: latency-svc-wqp2k
Jul  7 17:05:14.533: INFO: Got endpoints: latency-svc-wqp2k [2.094837992s]
Jul  7 17:05:14.599: INFO: Created: latency-svc-j5dcv
Jul  7 17:05:14.612: INFO: Got endpoints: latency-svc-j5dcv [1.924122245s]
Jul  7 17:05:14.667: INFO: Created: latency-svc-frwn8
Jul  7 17:05:14.684: INFO: Got endpoints: latency-svc-frwn8 [1.695298183s]
Jul  7 17:05:14.736: INFO: Created: latency-svc-8thcx
Jul  7 17:05:14.913: INFO: Got endpoints: latency-svc-8thcx [1.737220236s]
Jul  7 17:05:15.170: INFO: Created: latency-svc-4p5vj
Jul  7 17:05:15.463: INFO: Got endpoints: latency-svc-4p5vj [2.113374336s]
Jul  7 17:05:15.522: INFO: Created: latency-svc-9klps
Jul  7 17:05:15.607: INFO: Got endpoints: latency-svc-9klps [2.05713995s]
Jul  7 17:05:15.661: INFO: Created: latency-svc-vmqdv
Jul  7 17:05:15.676: INFO: Got endpoints: latency-svc-vmqdv [2.032938256s]
Jul  7 17:05:15.805: INFO: Created: latency-svc-qxd97
Jul  7 17:05:15.856: INFO: Got endpoints: latency-svc-qxd97 [2.110846376s]
Jul  7 17:05:16.225: INFO: Created: latency-svc-smv4n
Jul  7 17:05:16.275: INFO: Got endpoints: latency-svc-smv4n [2.404322559s]
Jul  7 17:05:16.459: INFO: Created: latency-svc-2b9jg
Jul  7 17:05:16.509: INFO: Got endpoints: latency-svc-2b9jg [2.465540911s]
Jul  7 17:05:16.622: INFO: Created: latency-svc-njp9b
Jul  7 17:05:16.647: INFO: Got endpoints: latency-svc-njp9b [2.544596684s]
Jul  7 17:05:16.675: INFO: Created: latency-svc-svvc4
Jul  7 17:05:16.690: INFO: Got endpoints: latency-svc-svvc4 [2.508903729s]
Jul  7 17:05:16.760: INFO: Created: latency-svc-xgs2r
Jul  7 17:05:16.786: INFO: Got endpoints: latency-svc-xgs2r [2.572091701s]
Jul  7 17:05:16.934: INFO: Created: latency-svc-v5lpb
Jul  7 17:05:16.955: INFO: Got endpoints: latency-svc-v5lpb [2.593123806s]
Jul  7 17:05:17.116: INFO: Created: latency-svc-7gcqd
Jul  7 17:05:17.120: INFO: Got endpoints: latency-svc-7gcqd [2.689059722s]
Jul  7 17:05:17.416: INFO: Created: latency-svc-9c77q
Jul  7 17:05:17.465: INFO: Got endpoints: latency-svc-9c77q [2.931457027s]
Jul  7 17:05:17.637: INFO: Created: latency-svc-gg8hl
Jul  7 17:05:17.736: INFO: Got endpoints: latency-svc-gg8hl [3.123764846s]
Jul  7 17:05:17.883: INFO: Created: latency-svc-bfnfp
Jul  7 17:05:17.914: INFO: Got endpoints: latency-svc-bfnfp [3.230886649s]
Jul  7 17:05:18.087: INFO: Created: latency-svc-mqbss
Jul  7 17:05:18.090: INFO: Got endpoints: latency-svc-mqbss [3.177460976s]
Jul  7 17:05:18.586: INFO: Created: latency-svc-w2cm6
Jul  7 17:05:18.598: INFO: Got endpoints: latency-svc-w2cm6 [3.135221802s]
Jul  7 17:05:18.674: INFO: Created: latency-svc-xbsz8
Jul  7 17:05:19.111: INFO: Got endpoints: latency-svc-xbsz8 [3.504260917s]
Jul  7 17:05:19.114: INFO: Created: latency-svc-46prz
Jul  7 17:05:19.601: INFO: Got endpoints: latency-svc-46prz [3.925295717s]
Jul  7 17:05:19.978: INFO: Created: latency-svc-mslw8
Jul  7 17:05:20.247: INFO: Got endpoints: latency-svc-mslw8 [4.39137808s]
Jul  7 17:05:20.307: INFO: Created: latency-svc-dnt2k
Jul  7 17:05:20.432: INFO: Got endpoints: latency-svc-dnt2k [4.157312906s]
Jul  7 17:05:20.571: INFO: Created: latency-svc-4vdh2
Jul  7 17:05:20.574: INFO: Got endpoints: latency-svc-4vdh2 [4.064086098s]
Jul  7 17:05:20.634: INFO: Created: latency-svc-tz4km
Jul  7 17:05:20.655: INFO: Got endpoints: latency-svc-tz4km [4.007800332s]
Jul  7 17:05:20.764: INFO: Created: latency-svc-8w4w6
Jul  7 17:05:21.003: INFO: Got endpoints: latency-svc-8w4w6 [4.313058465s]
Jul  7 17:05:21.436: INFO: Created: latency-svc-vhc97
Jul  7 17:05:21.454: INFO: Got endpoints: latency-svc-vhc97 [4.667325879s]
Jul  7 17:05:21.720: INFO: Created: latency-svc-lpqsl
Jul  7 17:05:21.723: INFO: Got endpoints: latency-svc-lpqsl [4.768617228s]
Jul  7 17:05:21.820: INFO: Created: latency-svc-h5hhc
Jul  7 17:05:22.044: INFO: Got endpoints: latency-svc-h5hhc [4.923874759s]
Jul  7 17:05:22.247: INFO: Created: latency-svc-fb4nt
Jul  7 17:05:22.310: INFO: Got endpoints: latency-svc-fb4nt [4.845116274s]
Jul  7 17:05:22.751: INFO: Created: latency-svc-2dv54
Jul  7 17:05:22.797: INFO: Got endpoints: latency-svc-2dv54 [5.061234454s]
Jul  7 17:05:23.040: INFO: Created: latency-svc-pkdcj
Jul  7 17:05:23.087: INFO: Got endpoints: latency-svc-pkdcj [5.172789539s]
Jul  7 17:05:23.130: INFO: Created: latency-svc-tsvlt
Jul  7 17:05:23.199: INFO: Got endpoints: latency-svc-tsvlt [5.108791334s]
Jul  7 17:05:23.246: INFO: Created: latency-svc-2qrk4
Jul  7 17:05:23.274: INFO: Got endpoints: latency-svc-2qrk4 [4.675386228s]
Jul  7 17:05:23.337: INFO: Created: latency-svc-nw9t5
Jul  7 17:05:23.382: INFO: Got endpoints: latency-svc-nw9t5 [4.270712685s]
Jul  7 17:05:23.383: INFO: Created: latency-svc-cjp65
Jul  7 17:05:23.416: INFO: Got endpoints: latency-svc-cjp65 [3.814923672s]
Jul  7 17:05:23.571: INFO: Created: latency-svc-86qsf
Jul  7 17:05:23.620: INFO: Got endpoints: latency-svc-86qsf [3.372513174s]
Jul  7 17:05:23.708: INFO: Created: latency-svc-vrqjh
Jul  7 17:05:23.711: INFO: Got endpoints: latency-svc-vrqjh [3.278345576s]
Jul  7 17:05:23.796: INFO: Created: latency-svc-rknvw
Jul  7 17:05:23.852: INFO: Got endpoints: latency-svc-rknvw [3.278700514s]
Jul  7 17:05:23.917: INFO: Created: latency-svc-xrh5b
Jul  7 17:05:23.990: INFO: Got endpoints: latency-svc-xrh5b [3.33436902s]
Jul  7 17:05:24.042: INFO: Created: latency-svc-fdqx9
Jul  7 17:05:24.177: INFO: Got endpoints: latency-svc-fdqx9 [3.173584173s]
Jul  7 17:05:24.191: INFO: Created: latency-svc-jk2gr
Jul  7 17:05:24.222: INFO: Got endpoints: latency-svc-jk2gr [2.767998319s]
Jul  7 17:05:24.272: INFO: Created: latency-svc-6rf9p
Jul  7 17:05:24.355: INFO: Got endpoints: latency-svc-6rf9p [2.631735848s]
Jul  7 17:05:24.428: INFO: Created: latency-svc-q4cz4
Jul  7 17:05:24.450: INFO: Got endpoints: latency-svc-q4cz4 [2.405755654s]
Jul  7 17:05:24.551: INFO: Created: latency-svc-lt99j
Jul  7 17:05:24.576: INFO: Got endpoints: latency-svc-lt99j [2.266052461s]
Jul  7 17:05:24.660: INFO: Created: latency-svc-xbtq7
Jul  7 17:05:24.667: INFO: Got endpoints: latency-svc-xbtq7 [1.869678459s]
Jul  7 17:05:24.704: INFO: Created: latency-svc-95zcj
Jul  7 17:05:24.733: INFO: Got endpoints: latency-svc-95zcj [1.645865061s]
Jul  7 17:05:24.816: INFO: Created: latency-svc-jbddm
Jul  7 17:05:24.872: INFO: Got endpoints: latency-svc-jbddm [1.672722757s]
Jul  7 17:05:24.909: INFO: Created: latency-svc-l6wjh
Jul  7 17:05:24.986: INFO: Got endpoints: latency-svc-l6wjh [1.712245677s]
Jul  7 17:05:25.160: INFO: Created: latency-svc-xjstf
Jul  7 17:05:25.332: INFO: Got endpoints: latency-svc-xjstf [1.949812457s]
Jul  7 17:05:25.822: INFO: Created: latency-svc-5ggs6
Jul  7 17:05:25.827: INFO: Got endpoints: latency-svc-5ggs6 [2.41039812s]
Jul  7 17:05:26.547: INFO: Created: latency-svc-jx588
Jul  7 17:05:26.619: INFO: Got endpoints: latency-svc-jx588 [2.998845434s]
Jul  7 17:05:26.744: INFO: Created: latency-svc-67fg4
Jul  7 17:05:26.793: INFO: Got endpoints: latency-svc-67fg4 [3.08180896s]
Jul  7 17:05:26.906: INFO: Created: latency-svc-ppfq5
Jul  7 17:05:26.935: INFO: Got endpoints: latency-svc-ppfq5 [3.082627587s]
Jul  7 17:05:26.992: INFO: Created: latency-svc-jb992
Jul  7 17:05:27.092: INFO: Got endpoints: latency-svc-jb992 [3.102280873s]
Jul  7 17:05:27.380: INFO: Created: latency-svc-ft8ns
Jul  7 17:05:27.698: INFO: Got endpoints: latency-svc-ft8ns [3.520527074s]
Jul  7 17:05:27.857: INFO: Created: latency-svc-p49bk
Jul  7 17:05:27.871: INFO: Got endpoints: latency-svc-p49bk [3.649043722s]
Jul  7 17:05:28.103: INFO: Created: latency-svc-sbsfc
Jul  7 17:05:28.378: INFO: Created: latency-svc-hqvcn
Jul  7 17:05:28.378: INFO: Got endpoints: latency-svc-sbsfc [4.023052459s]
Jul  7 17:05:28.673: INFO: Got endpoints: latency-svc-hqvcn [4.223219014s]
Jul  7 17:05:28.932: INFO: Created: latency-svc-m585k
Jul  7 17:05:29.134: INFO: Got endpoints: latency-svc-m585k [4.557655787s]
Jul  7 17:05:29.159: INFO: Created: latency-svc-pgpgj
Jul  7 17:05:29.319: INFO: Got endpoints: latency-svc-pgpgj [4.652428878s]
Jul  7 17:05:29.730: INFO: Created: latency-svc-2sjlr
Jul  7 17:05:30.122: INFO: Got endpoints: latency-svc-2sjlr [5.388677093s]
Jul  7 17:05:30.139: INFO: Created: latency-svc-dgwn2
Jul  7 17:05:30.303: INFO: Got endpoints: latency-svc-dgwn2 [5.431309456s]
Jul  7 17:05:30.834: INFO: Created: latency-svc-h66gf
Jul  7 17:05:30.861: INFO: Got endpoints: latency-svc-h66gf [5.874345152s]
Jul  7 17:05:31.098: INFO: Created: latency-svc-c8sx4
Jul  7 17:05:31.138: INFO: Got endpoints: latency-svc-c8sx4 [5.806130337s]
Jul  7 17:05:31.284: INFO: Created: latency-svc-wdrml
Jul  7 17:05:31.314: INFO: Got endpoints: latency-svc-wdrml [5.48690332s]
Jul  7 17:05:31.314: INFO: Created: latency-svc-h9cgq
Jul  7 17:05:31.344: INFO: Got endpoints: latency-svc-h9cgq [4.725051789s]
Jul  7 17:05:31.374: INFO: Created: latency-svc-td42r
Jul  7 17:05:31.415: INFO: Got endpoints: latency-svc-td42r [4.622005745s]
Jul  7 17:05:31.452: INFO: Created: latency-svc-ql99z
Jul  7 17:05:31.473: INFO: Got endpoints: latency-svc-ql99z [4.537987975s]
Jul  7 17:05:31.506: INFO: Created: latency-svc-cjmqw
Jul  7 17:05:31.613: INFO: Got endpoints: latency-svc-cjmqw [4.520807941s]
Jul  7 17:05:31.629: INFO: Created: latency-svc-9hwlk
Jul  7 17:05:31.699: INFO: Got endpoints: latency-svc-9hwlk [4.001415739s]
Jul  7 17:05:31.759: INFO: Created: latency-svc-lg45w
Jul  7 17:05:31.783: INFO: Got endpoints: latency-svc-lg45w [3.911943783s]
Jul  7 17:05:31.819: INFO: Created: latency-svc-dw9q8
Jul  7 17:05:31.837: INFO: Got endpoints: latency-svc-dw9q8 [3.458889416s]
Jul  7 17:05:31.885: INFO: Created: latency-svc-s6k7n
Jul  7 17:05:31.898: INFO: Got endpoints: latency-svc-s6k7n [3.224370462s]
Jul  7 17:05:32.069: INFO: Created: latency-svc-twkh5
Jul  7 17:05:32.071: INFO: Got endpoints: latency-svc-twkh5 [2.936831127s]
Jul  7 17:05:32.167: INFO: Created: latency-svc-rcq92
Jul  7 17:05:32.521: INFO: Got endpoints: latency-svc-rcq92 [3.201608001s]
Jul  7 17:05:32.761: INFO: Created: latency-svc-zbvzh
Jul  7 17:05:32.975: INFO: Got endpoints: latency-svc-zbvzh [2.853231379s]
Jul  7 17:05:33.327: INFO: Created: latency-svc-4rjs5
Jul  7 17:05:33.398: INFO: Got endpoints: latency-svc-4rjs5 [3.094292902s]
Jul  7 17:05:33.560: INFO: Created: latency-svc-jmsns
Jul  7 17:05:33.583: INFO: Got endpoints: latency-svc-jmsns [2.722259865s]
Jul  7 17:05:33.704: INFO: Created: latency-svc-l5xf9
Jul  7 17:05:33.866: INFO: Got endpoints: latency-svc-l5xf9 [2.727680844s]
Jul  7 17:05:34.135: INFO: Created: latency-svc-dnm4m
Jul  7 17:05:34.283: INFO: Got endpoints: latency-svc-dnm4m [2.969582528s]
Jul  7 17:05:34.446: INFO: Created: latency-svc-7jjbv
Jul  7 17:05:34.637: INFO: Got endpoints: latency-svc-7jjbv [3.293285715s]
Jul  7 17:05:34.812: INFO: Created: latency-svc-jmrs4
Jul  7 17:05:34.897: INFO: Got endpoints: latency-svc-jmrs4 [3.481908513s]
Jul  7 17:05:35.107: INFO: Created: latency-svc-rxlmz
Jul  7 17:05:35.148: INFO: Got endpoints: latency-svc-rxlmz [3.675360386s]
Jul  7 17:05:35.475: INFO: Created: latency-svc-t8msj
Jul  7 17:05:35.496: INFO: Got endpoints: latency-svc-t8msj [3.883385795s]
Jul  7 17:05:35.697: INFO: Created: latency-svc-kbm2r
Jul  7 17:05:35.731: INFO: Got endpoints: latency-svc-kbm2r [4.031444488s]
Jul  7 17:05:35.882: INFO: Created: latency-svc-6f8nb
Jul  7 17:05:35.937: INFO: Got endpoints: latency-svc-6f8nb [4.1540746s]
Jul  7 17:05:36.032: INFO: Created: latency-svc-rtsnm
Jul  7 17:05:36.037: INFO: Got endpoints: latency-svc-rtsnm [4.199858946s]
Jul  7 17:05:36.128: INFO: Created: latency-svc-vdpm9
Jul  7 17:05:36.169: INFO: Got endpoints: latency-svc-vdpm9 [4.271596629s]
Jul  7 17:05:36.202: INFO: Created: latency-svc-c4d4b
Jul  7 17:05:36.235: INFO: Got endpoints: latency-svc-c4d4b [4.164292098s]
Jul  7 17:05:36.350: INFO: Created: latency-svc-2hvrm
Jul  7 17:05:36.398: INFO: Got endpoints: latency-svc-2hvrm [3.876935372s]
Jul  7 17:05:36.399: INFO: Created: latency-svc-xm794
Jul  7 17:05:36.416: INFO: Got endpoints: latency-svc-xm794 [3.440571629s]
Jul  7 17:05:36.494: INFO: Created: latency-svc-sz5tb
Jul  7 17:05:36.530: INFO: Got endpoints: latency-svc-sz5tb [3.131921903s]
Jul  7 17:05:36.585: INFO: Created: latency-svc-d27ch
Jul  7 17:05:36.672: INFO: Got endpoints: latency-svc-d27ch [3.088923694s]
Jul  7 17:05:36.687: INFO: Created: latency-svc-pt87f
Jul  7 17:05:36.705: INFO: Got endpoints: latency-svc-pt87f [2.839322858s]
Jul  7 17:05:36.834: INFO: Created: latency-svc-pj5gm
Jul  7 17:05:36.843: INFO: Got endpoints: latency-svc-pj5gm [2.559349713s]
Jul  7 17:05:36.990: INFO: Created: latency-svc-mtgpt
Jul  7 17:05:37.000: INFO: Got endpoints: latency-svc-mtgpt [2.362308765s]
Jul  7 17:05:37.078: INFO: Created: latency-svc-b2ngk
Jul  7 17:05:37.175: INFO: Got endpoints: latency-svc-b2ngk [2.278429046s]
Jul  7 17:05:37.177: INFO: Created: latency-svc-nj6t8
Jul  7 17:05:37.258: INFO: Got endpoints: latency-svc-nj6t8 [2.109802072s]
Jul  7 17:05:37.373: INFO: Created: latency-svc-9gmzr
Jul  7 17:05:37.392: INFO: Got endpoints: latency-svc-9gmzr [1.895490194s]
Jul  7 17:05:37.829: INFO: Created: latency-svc-rnr9x
Jul  7 17:05:38.002: INFO: Got endpoints: latency-svc-rnr9x [2.271701275s]
Jul  7 17:05:38.088: INFO: Created: latency-svc-dfvg7
Jul  7 17:05:38.098: INFO: Got endpoints: latency-svc-dfvg7 [2.160701804s]
Jul  7 17:05:38.191: INFO: Created: latency-svc-66qcw
Jul  7 17:05:38.257: INFO: Got endpoints: latency-svc-66qcw [2.219727089s]
Jul  7 17:05:38.342: INFO: Created: latency-svc-p4btx
Jul  7 17:05:38.381: INFO: Got endpoints: latency-svc-p4btx [2.211511997s]
Jul  7 17:05:38.580: INFO: Created: latency-svc-8mdvn
Jul  7 17:05:38.633: INFO: Got endpoints: latency-svc-8mdvn [2.397613582s]
Jul  7 17:05:38.839: INFO: Created: latency-svc-b68bp
Jul  7 17:05:38.954: INFO: Got endpoints: latency-svc-b68bp [2.556285127s]
Jul  7 17:05:39.033: INFO: Created: latency-svc-98twc
Jul  7 17:05:39.128: INFO: Got endpoints: latency-svc-98twc [2.712133674s]
Jul  7 17:05:39.308: INFO: Created: latency-svc-cthjg
Jul  7 17:05:39.475: INFO: Got endpoints: latency-svc-cthjg [2.94506129s]
Jul  7 17:05:39.525: INFO: Created: latency-svc-6smbb
Jul  7 17:05:39.547: INFO: Got endpoints: latency-svc-6smbb [2.874588618s]
Jul  7 17:05:39.664: INFO: Created: latency-svc-hndcg
Jul  7 17:05:39.678: INFO: Got endpoints: latency-svc-hndcg [2.972888131s]
Jul  7 17:05:39.841: INFO: Created: latency-svc-hxd2j
Jul  7 17:05:39.844: INFO: Got endpoints: latency-svc-hxd2j [3.000934857s]
Jul  7 17:05:40.165: INFO: Created: latency-svc-fzjl2
Jul  7 17:05:40.231: INFO: Got endpoints: latency-svc-fzjl2 [3.230944557s]
Jul  7 17:05:40.578: INFO: Created: latency-svc-7hm4f
Jul  7 17:05:41.189: INFO: Got endpoints: latency-svc-7hm4f [4.013847549s]
Jul  7 17:05:42.067: INFO: Created: latency-svc-ztqhz
Jul  7 17:05:42.118: INFO: Got endpoints: latency-svc-ztqhz [4.860113585s]
Jul  7 17:05:42.764: INFO: Created: latency-svc-5zvj8
Jul  7 17:05:42.776: INFO: Got endpoints: latency-svc-5zvj8 [5.384395781s]
Jul  7 17:05:43.209: INFO: Created: latency-svc-h2nn6
Jul  7 17:05:43.256: INFO: Got endpoints: latency-svc-h2nn6 [5.25360952s]
Jul  7 17:05:43.619: INFO: Created: latency-svc-b9wwg
Jul  7 17:05:43.804: INFO: Created: latency-svc-jbmcr
Jul  7 17:05:43.805: INFO: Got endpoints: latency-svc-b9wwg [5.706794256s]
Jul  7 17:05:43.844: INFO: Got endpoints: latency-svc-jbmcr [5.587537951s]
Jul  7 17:05:43.996: INFO: Created: latency-svc-c8b8f
Jul  7 17:05:44.082: INFO: Got endpoints: latency-svc-c8b8f [5.701083824s]
Jul  7 17:05:44.190: INFO: Created: latency-svc-hmgxx
Jul  7 17:05:44.752: INFO: Got endpoints: latency-svc-hmgxx [6.118618798s]
Jul  7 17:05:45.255: INFO: Created: latency-svc-l79vk
Jul  7 17:05:45.295: INFO: Got endpoints: latency-svc-l79vk [6.341135763s]
Jul  7 17:05:45.792: INFO: Created: latency-svc-pgn7l
Jul  7 17:05:45.895: INFO: Got endpoints: latency-svc-pgn7l [6.767351199s]
Jul  7 17:05:46.124: INFO: Created: latency-svc-h8cc4
Jul  7 17:05:46.380: INFO: Got endpoints: latency-svc-h8cc4 [6.904638383s]
Jul  7 17:05:46.600: INFO: Created: latency-svc-wqhcg
Jul  7 17:05:46.859: INFO: Got endpoints: latency-svc-wqhcg [7.312440905s]
Jul  7 17:05:47.118: INFO: Created: latency-svc-f684b
Jul  7 17:05:47.123: INFO: Got endpoints: latency-svc-f684b [7.44470357s]
Jul  7 17:05:47.344: INFO: Created: latency-svc-c7jzr
Jul  7 17:05:47.347: INFO: Got endpoints: latency-svc-c7jzr [7.503039756s]
Jul  7 17:05:47.997: INFO: Created: latency-svc-xnvtz
Jul  7 17:05:48.000: INFO: Got endpoints: latency-svc-xnvtz [7.76941611s]
Jul  7 17:05:48.255: INFO: Created: latency-svc-8fs7s
Jul  7 17:05:48.287: INFO: Got endpoints: latency-svc-8fs7s [7.097405365s]
Jul  7 17:05:48.452: INFO: Created: latency-svc-hbprc
Jul  7 17:05:48.462: INFO: Got endpoints: latency-svc-hbprc [6.343909387s]
Jul  7 17:05:48.518: INFO: Created: latency-svc-s9xhv
Jul  7 17:05:48.697: INFO: Got endpoints: latency-svc-s9xhv [5.920114463s]
Jul  7 17:05:49.376: INFO: Created: latency-svc-mgjgl
Jul  7 17:05:49.896: INFO: Got endpoints: latency-svc-mgjgl [6.639913521s]
Jul  7 17:05:50.295: INFO: Created: latency-svc-kb8dq
Jul  7 17:05:50.660: INFO: Got endpoints: latency-svc-kb8dq [6.855192491s]
Jul  7 17:05:51.050: INFO: Created: latency-svc-nqc7s
Jul  7 17:05:51.344: INFO: Got endpoints: latency-svc-nqc7s [7.499307575s]
Jul  7 17:05:51.776: INFO: Created: latency-svc-frrgh
Jul  7 17:05:52.038: INFO: Got endpoints: latency-svc-frrgh [7.956195668s]
Jul  7 17:05:52.095: INFO: Created: latency-svc-g7kqx
Jul  7 17:05:52.866: INFO: Got endpoints: latency-svc-g7kqx [8.114777853s]
Jul  7 17:05:52.952: INFO: Created: latency-svc-j74t4
Jul  7 17:05:53.176: INFO: Got endpoints: latency-svc-j74t4 [7.881004645s]
Jul  7 17:05:53.177: INFO: Latencies: [466.408513ms 535.231954ms 611.944226ms 858.042578ms 1.029920366s 1.057171249s 1.078830222s 1.085504806s 1.085507031s 1.08577838s 1.090435774s 1.097851005s 1.108890804s 1.10941464s 1.120874999s 1.132475522s 1.142248273s 1.144714743s 1.148001954s 1.184690645s 1.221979774s 1.506261828s 1.523997757s 1.579810512s 1.619065604s 1.645865061s 1.672722757s 1.695298183s 1.712245677s 1.737220236s 1.856336167s 1.869678459s 1.895490194s 1.898130025s 1.924122245s 1.949812457s 1.951225168s 1.966399546s 1.975175804s 2.002272503s 2.006841585s 2.011213502s 2.016393562s 2.025137937s 2.032938256s 2.036362123s 2.046300872s 2.048095887s 2.05713995s 2.059397109s 2.064284087s 2.08588082s 2.094837992s 2.097878626s 2.103067659s 2.109802072s 2.110846376s 2.113374336s 2.138416499s 2.15604931s 2.160701804s 2.211511997s 2.219727089s 2.266052461s 2.271701275s 2.278429046s 2.33692357s 2.352619186s 2.362308765s 2.383768243s 2.397613582s 2.404322559s 2.405755654s 2.41039812s 2.454842985s 2.465540911s 2.508903729s 2.544596684s 2.556285127s 2.559349713s 2.572091701s 2.593123806s 2.614569886s 2.631735848s 2.673025569s 2.689059722s 2.712133674s 2.722259865s 2.727680844s 2.767998319s 2.81452191s 2.839322858s 2.840264468s 2.853231379s 2.874588618s 2.931457027s 2.936831127s 2.94506129s 2.959179842s 2.969582528s 2.972888131s 2.988616996s 2.993425663s 2.998845434s 3.000934857s 3.008667041s 3.045284566s 3.08180896s 3.082627587s 3.088923694s 3.094292902s 3.102280873s 3.123764846s 3.131921903s 3.135221802s 3.156495303s 3.173584173s 3.177460976s 3.201608001s 3.216038266s 3.224370462s 3.230886649s 3.230944557s 3.278345576s 3.278700514s 3.293285715s 3.310444639s 3.33436902s 3.372513174s 3.38270222s 3.440571629s 3.458889416s 3.467304351s 3.481908513s 3.504260917s 3.520527074s 3.649043722s 3.675360386s 3.814923672s 3.876935372s 3.883385795s 3.911943783s 3.925295717s 4.001415739s 4.007800332s 4.013847549s 4.023052459s 4.031444488s 4.064086098s 4.1540746s 4.157312906s 4.164292098s 4.199858946s 4.223219014s 4.270712685s 4.271596629s 4.313058465s 4.39137808s 4.520807941s 4.537987975s 4.557655787s 4.622005745s 4.652428878s 4.667325879s 4.675386228s 4.725051789s 4.768617228s 4.845116274s 4.860113585s 4.923874759s 5.061234454s 5.108791334s 5.172789539s 5.25360952s 5.384395781s 5.388677093s 5.431309456s 5.48690332s 5.587537951s 5.701083824s 5.706794256s 5.806130337s 5.874345152s 5.920114463s 6.118618798s 6.341135763s 6.343909387s 6.639913521s 6.767351199s 6.855192491s 6.904638383s 7.097405365s 7.312440905s 7.44470357s 7.499307575s 7.503039756s 7.76941611s 7.881004645s 7.956195668s 8.114777853s]
Jul  7 17:05:53.177: INFO: 50 %ile: 2.972888131s
Jul  7 17:05:53.177: INFO: 90 %ile: 5.706794256s
Jul  7 17:05:53.177: INFO: 99 %ile: 7.956195668s
Jul  7 17:05:53.177: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:05:53.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-224" for this suite.

• [SLOW TEST:55.857 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":264,"skipped":4453,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:05:53.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:06:09.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1348" for this suite.

• [SLOW TEST:17.121 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":265,"skipped":4453,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:06:10.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jul  7 17:06:13.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8134'
Jul  7 17:06:35.437: INFO: stderr: ""
Jul  7 17:06:35.437: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  7 17:06:35.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8134'
Jul  7 17:06:35.652: INFO: stderr: ""
Jul  7 17:06:35.652: INFO: stdout: "update-demo-nautilus-6vv22 update-demo-nautilus-w56st "
Jul  7 17:06:35.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vv22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8134'
Jul  7 17:06:35.838: INFO: stderr: ""
Jul  7 17:06:35.838: INFO: stdout: ""
Jul  7 17:06:35.838: INFO: update-demo-nautilus-6vv22 is created but not running
Jul  7 17:06:40.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8134'
Jul  7 17:06:41.128: INFO: stderr: ""
Jul  7 17:06:41.128: INFO: stdout: "update-demo-nautilus-6vv22 update-demo-nautilus-w56st "
Jul  7 17:06:41.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vv22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8134'
Jul  7 17:06:41.599: INFO: stderr: ""
Jul  7 17:06:41.599: INFO: stdout: ""
Jul  7 17:06:41.599: INFO: update-demo-nautilus-6vv22 is created but not running
Jul  7 17:06:46.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8134'
Jul  7 17:06:46.748: INFO: stderr: ""
Jul  7 17:06:46.748: INFO: stdout: "update-demo-nautilus-6vv22 update-demo-nautilus-w56st "
Jul  7 17:06:46.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vv22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8134'
Jul  7 17:06:46.929: INFO: stderr: ""
Jul  7 17:06:46.929: INFO: stdout: "true"
Jul  7 17:06:46.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vv22 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8134'
Jul  7 17:06:47.126: INFO: stderr: ""
Jul  7 17:06:47.126: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 17:06:47.126: INFO: validating pod update-demo-nautilus-6vv22
Jul  7 17:06:47.568: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 17:06:47.568: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  7 17:06:47.568: INFO: update-demo-nautilus-6vv22 is verified up and running
Jul  7 17:06:47.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w56st -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8134'
Jul  7 17:06:47.818: INFO: stderr: ""
Jul  7 17:06:47.818: INFO: stdout: "true"
Jul  7 17:06:47.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w56st -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8134'
Jul  7 17:06:48.226: INFO: stderr: ""
Jul  7 17:06:48.226: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 17:06:48.226: INFO: validating pod update-demo-nautilus-w56st
Jul  7 17:06:48.388: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 17:06:48.388: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  7 17:06:48.388: INFO: update-demo-nautilus-w56st is verified up and running
STEP: using delete to clean up resources
Jul  7 17:06:48.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8134'
Jul  7 17:06:48.572: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 17:06:48.572: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  7 17:06:48.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8134'
Jul  7 17:06:48.734: INFO: stderr: "No resources found in kubectl-8134 namespace.\n"
Jul  7 17:06:48.734: INFO: stdout: ""
Jul  7 17:06:48.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8134 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  7 17:06:48.863: INFO: stderr: ""
Jul  7 17:06:48.863: INFO: stdout: "update-demo-nautilus-6vv22\nupdate-demo-nautilus-w56st\n"
Jul  7 17:06:49.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8134'
Jul  7 17:06:49.525: INFO: stderr: "No resources found in kubectl-8134 namespace.\n"
Jul  7 17:06:49.525: INFO: stdout: ""
Jul  7 17:06:49.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8134 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  7 17:06:49.638: INFO: stderr: ""
Jul  7 17:06:49.638: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:06:49.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8134" for this suite.

• [SLOW TEST:39.334 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":266,"skipped":4466,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:06:49.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-b38aab4a-045b-4e0e-a41e-805adef9d5a4
STEP: Creating a pod to test consume secrets
Jul  7 17:06:50.792: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4e40e85c-6763-4d26-a15d-e639df66ec1b" in namespace "projected-8421" to be "success or failure"
Jul  7 17:06:50.920: INFO: Pod "pod-projected-secrets-4e40e85c-6763-4d26-a15d-e639df66ec1b": Phase="Pending", Reason="", readiness=false. Elapsed: 127.39399ms
Jul  7 17:06:53.370: INFO: Pod "pod-projected-secrets-4e40e85c-6763-4d26-a15d-e639df66ec1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.5777405s
Jul  7 17:06:55.801: INFO: Pod "pod-projected-secrets-4e40e85c-6763-4d26-a15d-e639df66ec1b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.00886838s
Jul  7 17:06:58.039: INFO: Pod "pod-projected-secrets-4e40e85c-6763-4d26-a15d-e639df66ec1b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.246879426s
Jul  7 17:07:00.079: INFO: Pod "pod-projected-secrets-4e40e85c-6763-4d26-a15d-e639df66ec1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.28662582s
STEP: Saw pod success
Jul  7 17:07:00.079: INFO: Pod "pod-projected-secrets-4e40e85c-6763-4d26-a15d-e639df66ec1b" satisfied condition "success or failure"
Jul  7 17:07:00.084: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-4e40e85c-6763-4d26-a15d-e639df66ec1b container projected-secret-volume-test: 
STEP: delete the pod
Jul  7 17:07:00.209: INFO: Waiting for pod pod-projected-secrets-4e40e85c-6763-4d26-a15d-e639df66ec1b to disappear
Jul  7 17:07:00.212: INFO: Pod pod-projected-secrets-4e40e85c-6763-4d26-a15d-e639df66ec1b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:07:00.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8421" for this suite.

• [SLOW TEST:10.580 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4471,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:07:00.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul  7 17:07:00.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df5a6606-ca91-49eb-90ca-51a40bd0b572" in namespace "downward-api-7054" to be "success or failure"
Jul  7 17:07:00.506: INFO: Pod "downwardapi-volume-df5a6606-ca91-49eb-90ca-51a40bd0b572": Phase="Pending", Reason="", readiness=false. Elapsed: 5.795377ms
Jul  7 17:07:02.628: INFO: Pod "downwardapi-volume-df5a6606-ca91-49eb-90ca-51a40bd0b572": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127966327s
Jul  7 17:07:04.660: INFO: Pod "downwardapi-volume-df5a6606-ca91-49eb-90ca-51a40bd0b572": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159361448s
Jul  7 17:07:06.674: INFO: Pod "downwardapi-volume-df5a6606-ca91-49eb-90ca-51a40bd0b572": Phase="Running", Reason="", readiness=true. Elapsed: 6.174106984s
Jul  7 17:07:08.689: INFO: Pod "downwardapi-volume-df5a6606-ca91-49eb-90ca-51a40bd0b572": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.188801155s
STEP: Saw pod success
Jul  7 17:07:08.689: INFO: Pod "downwardapi-volume-df5a6606-ca91-49eb-90ca-51a40bd0b572" satisfied condition "success or failure"
Jul  7 17:07:08.691: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-df5a6606-ca91-49eb-90ca-51a40bd0b572 container client-container: 
STEP: delete the pod
Jul  7 17:07:08.812: INFO: Waiting for pod downwardapi-volume-df5a6606-ca91-49eb-90ca-51a40bd0b572 to disappear
Jul  7 17:07:08.815: INFO: Pod downwardapi-volume-df5a6606-ca91-49eb-90ca-51a40bd0b572 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:07:08.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7054" for this suite.

• [SLOW TEST:8.551 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4477,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:07:08.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-cd119aa8-e667-4b49-9b96-dc94925f6b2c
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:07:08.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3561" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":269,"skipped":4477,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:07:09.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-f622cb7e-fbd8-4054-9caa-7d407da8070f
STEP: Creating a pod to test consume secrets
Jul  7 17:07:09.389: INFO: Waiting up to 5m0s for pod "pod-secrets-b00586b0-9ef0-44b1-b468-bffa48d7c4ff" in namespace "secrets-8229" to be "success or failure"
Jul  7 17:07:09.451: INFO: Pod "pod-secrets-b00586b0-9ef0-44b1-b468-bffa48d7c4ff": Phase="Pending", Reason="", readiness=false. Elapsed: 62.272355ms
Jul  7 17:07:12.417: INFO: Pod "pod-secrets-b00586b0-9ef0-44b1-b468-bffa48d7c4ff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.028177172s
Jul  7 17:07:14.958: INFO: Pod "pod-secrets-b00586b0-9ef0-44b1-b468-bffa48d7c4ff": Phase="Pending", Reason="", readiness=false. Elapsed: 5.568687252s
Jul  7 17:07:17.088: INFO: Pod "pod-secrets-b00586b0-9ef0-44b1-b468-bffa48d7c4ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.699388485s
STEP: Saw pod success
Jul  7 17:07:17.088: INFO: Pod "pod-secrets-b00586b0-9ef0-44b1-b468-bffa48d7c4ff" satisfied condition "success or failure"
Jul  7 17:07:17.262: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-b00586b0-9ef0-44b1-b468-bffa48d7c4ff container secret-volume-test: 
STEP: delete the pod
Jul  7 17:07:18.722: INFO: Waiting for pod pod-secrets-b00586b0-9ef0-44b1-b468-bffa48d7c4ff to disappear
Jul  7 17:07:18.776: INFO: Pod pod-secrets-b00586b0-9ef0-44b1-b468-bffa48d7c4ff no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:07:18.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8229" for this suite.

• [SLOW TEST:10.480 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4479,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:07:19.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jul  7 17:07:21.776: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jul  7 17:07:24.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738441, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738441, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738441, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738441, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 17:07:27.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738441, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738441, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738441, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738441, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 17:07:30.270: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 17:07:30.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:07:31.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8973" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:12.457 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":271,"skipped":4495,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:07:31.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jul  7 17:07:33.308: INFO: namespace kubectl-8214
Jul  7 17:07:33.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8214'
Jul  7 17:07:33.939: INFO: stderr: ""
Jul  7 17:07:33.939: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul  7 17:07:34.947: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 17:07:34.947: INFO: Found 0 / 1
Jul  7 17:07:36.276: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 17:07:36.276: INFO: Found 0 / 1
Jul  7 17:07:36.952: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 17:07:36.952: INFO: Found 0 / 1
Jul  7 17:07:38.142: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 17:07:38.142: INFO: Found 0 / 1
Jul  7 17:07:38.955: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 17:07:38.955: INFO: Found 1 / 1
Jul  7 17:07:38.955: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  7 17:07:38.982: INFO: Selector matched 1 pods for map[app:agnhost]
Jul  7 17:07:38.982: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  7 17:07:38.982: INFO: wait on agnhost-master startup in kubectl-8214 
Jul  7 17:07:38.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-mgn2d agnhost-master --namespace=kubectl-8214'
Jul  7 17:07:39.115: INFO: stderr: ""
Jul  7 17:07:39.115: INFO: stdout: "Paused\n"
STEP: exposing RC
Jul  7 17:07:39.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8214'
Jul  7 17:07:39.401: INFO: stderr: ""
Jul  7 17:07:39.401: INFO: stdout: "service/rm2 exposed\n"
Jul  7 17:07:39.609: INFO: Service rm2 in namespace kubectl-8214 found.
STEP: exposing service
Jul  7 17:07:42.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8214'
Jul  7 17:07:42.597: INFO: stderr: ""
Jul  7 17:07:42.597: INFO: stdout: "service/rm3 exposed\n"
Jul  7 17:07:42.615: INFO: Service rm3 in namespace kubectl-8214 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:07:44.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8214" for this suite.

• [SLOW TEST:12.725 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":272,"skipped":4503,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:07:44.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  7 17:07:45.845: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul  7 17:07:47.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738465, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738465, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738465, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738465, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 17:07:49.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738465, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738465, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738465, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729738465, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 17:07:52.901: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:07:54.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3805" for this suite.
STEP: Destroying namespace "webhook-3805-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.429 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":273,"skipped":4525,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:07:55.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  7 17:07:56.498: INFO: Waiting up to 5m0s for pod "pod-2cb18273-1974-4997-af8b-d7dc53a1d462" in namespace "emptydir-443" to be "success or failure"
Jul  7 17:07:56.758: INFO: Pod "pod-2cb18273-1974-4997-af8b-d7dc53a1d462": Phase="Pending", Reason="", readiness=false. Elapsed: 259.828332ms
Jul  7 17:07:58.762: INFO: Pod "pod-2cb18273-1974-4997-af8b-d7dc53a1d462": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264636629s
Jul  7 17:08:00.855: INFO: Pod "pod-2cb18273-1974-4997-af8b-d7dc53a1d462": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357318619s
Jul  7 17:08:02.858: INFO: Pod "pod-2cb18273-1974-4997-af8b-d7dc53a1d462": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360629698s
Jul  7 17:08:04.862: INFO: Pod "pod-2cb18273-1974-4997-af8b-d7dc53a1d462": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.363992748s
STEP: Saw pod success
Jul  7 17:08:04.862: INFO: Pod "pod-2cb18273-1974-4997-af8b-d7dc53a1d462" satisfied condition "success or failure"
Jul  7 17:08:04.864: INFO: Trying to get logs from node jerma-worker2 pod pod-2cb18273-1974-4997-af8b-d7dc53a1d462 container test-container: 
STEP: delete the pod
Jul  7 17:08:04.890: INFO: Waiting for pod pod-2cb18273-1974-4997-af8b-d7dc53a1d462 to disappear
Jul  7 17:08:04.900: INFO: Pod pod-2cb18273-1974-4997-af8b-d7dc53a1d462 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:08:04.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-443" for this suite.

• [SLOW TEST:9.798 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4543,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:08:04.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul  7 17:08:09.185: INFO: Waiting up to 5m0s for pod "client-envvars-621a62fa-8d56-4553-b56a-f801889cc7de" in namespace "pods-6078" to be "success or failure"
Jul  7 17:08:09.195: INFO: Pod "client-envvars-621a62fa-8d56-4553-b56a-f801889cc7de": Phase="Pending", Reason="", readiness=false. Elapsed: 9.957774ms
Jul  7 17:08:11.303: INFO: Pod "client-envvars-621a62fa-8d56-4553-b56a-f801889cc7de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1173592s
Jul  7 17:08:13.306: INFO: Pod "client-envvars-621a62fa-8d56-4553-b56a-f801889cc7de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120440203s
Jul  7 17:08:15.310: INFO: Pod "client-envvars-621a62fa-8d56-4553-b56a-f801889cc7de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124324828s
STEP: Saw pod success
Jul  7 17:08:15.310: INFO: Pod "client-envvars-621a62fa-8d56-4553-b56a-f801889cc7de" satisfied condition "success or failure"
Jul  7 17:08:15.312: INFO: Trying to get logs from node jerma-worker pod client-envvars-621a62fa-8d56-4553-b56a-f801889cc7de container env3cont: 
STEP: delete the pod
Jul  7 17:08:15.382: INFO: Waiting for pod client-envvars-621a62fa-8d56-4553-b56a-f801889cc7de to disappear
Jul  7 17:08:15.410: INFO: Pod client-envvars-621a62fa-8d56-4553-b56a-f801889cc7de no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:08:15.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6078" for this suite.

• [SLOW TEST:10.590 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4544,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:08:15.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul  7 17:08:15.663: INFO: Waiting up to 5m0s for pod "downward-api-f414eda6-21ca-4a7c-9625-bf0c482e2041" in namespace "downward-api-9811" to be "success or failure"
Jul  7 17:08:15.667: INFO: Pod "downward-api-f414eda6-21ca-4a7c-9625-bf0c482e2041": Phase="Pending", Reason="", readiness=false. Elapsed: 3.59492ms
Jul  7 17:08:17.860: INFO: Pod "downward-api-f414eda6-21ca-4a7c-9625-bf0c482e2041": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197181136s
Jul  7 17:08:19.864: INFO: Pod "downward-api-f414eda6-21ca-4a7c-9625-bf0c482e2041": Phase="Running", Reason="", readiness=true. Elapsed: 4.200847743s
Jul  7 17:08:22.119: INFO: Pod "downward-api-f414eda6-21ca-4a7c-9625-bf0c482e2041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.455774484s
STEP: Saw pod success
Jul  7 17:08:22.119: INFO: Pod "downward-api-f414eda6-21ca-4a7c-9625-bf0c482e2041" satisfied condition "success or failure"
Jul  7 17:08:22.123: INFO: Trying to get logs from node jerma-worker pod downward-api-f414eda6-21ca-4a7c-9625-bf0c482e2041 container dapi-container: 
STEP: delete the pod
Jul  7 17:08:22.731: INFO: Waiting for pod downward-api-f414eda6-21ca-4a7c-9625-bf0c482e2041 to disappear
Jul  7 17:08:22.763: INFO: Pod downward-api-f414eda6-21ca-4a7c-9625-bf0c482e2041 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:08:22.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9811" for this suite.

• [SLOW TEST:7.279 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4548,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul  7 17:08:22.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-443
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-443
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-443
Jul  7 17:08:23.070: INFO: Found 0 stateful pods, waiting for 1
Jul  7 17:08:33.075: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jul  7 17:08:33.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  7 17:08:33.480: INFO: stderr: "I0707 17:08:33.215965    3839 log.go:172] (0xc000b44840) (0xc0008ca000) Create stream\nI0707 17:08:33.216036    3839 log.go:172] (0xc000b44840) (0xc0008ca000) Stream added, broadcasting: 1\nI0707 17:08:33.219266    3839 log.go:172] (0xc000b44840) Reply frame received for 1\nI0707 17:08:33.219313    3839 log.go:172] (0xc000b44840) (0xc0006f5ae0) Create stream\nI0707 17:08:33.219325    3839 log.go:172] (0xc000b44840) (0xc0006f5ae0) Stream added, broadcasting: 3\nI0707 17:08:33.220298    3839 log.go:172] (0xc000b44840) Reply frame received for 3\nI0707 17:08:33.220341    3839 log.go:172] (0xc000b44840) (0xc0008ca0a0) Create stream\nI0707 17:08:33.220365    3839 log.go:172] (0xc000b44840) (0xc0008ca0a0) Stream added, broadcasting: 5\nI0707 17:08:33.221507    3839 log.go:172] (0xc000b44840) Reply frame received for 5\nI0707 17:08:33.282857    3839 log.go:172] (0xc000b44840) Data frame received for 5\nI0707 17:08:33.282877    3839 log.go:172] (0xc0008ca0a0) (5) Data frame handling\nI0707 17:08:33.282884    3839 log.go:172] (0xc0008ca0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0707 17:08:33.472771    3839 log.go:172] (0xc000b44840) Data frame received for 3\nI0707 17:08:33.472824    3839 log.go:172] (0xc0006f5ae0) (3) Data frame handling\nI0707 17:08:33.472874    3839 log.go:172] (0xc0006f5ae0) (3) Data frame sent\nI0707 17:08:33.473064    3839 log.go:172] (0xc000b44840) Data frame received for 5\nI0707 17:08:33.473481    3839 log.go:172] (0xc0008ca0a0) (5) Data frame handling\nI0707 17:08:33.473509    3839 log.go:172] (0xc000b44840) Data frame received for 3\nI0707 17:08:33.473516    3839 log.go:172] (0xc0006f5ae0) (3) Data frame handling\nI0707 17:08:33.475292    3839 log.go:172] (0xc000b44840) Data frame received for 1\nI0707 17:08:33.475324    3839 log.go:172] (0xc0008ca000) (1) Data frame handling\nI0707 17:08:33.475353    3839 log.go:172] (0xc0008ca000) (1) Data frame sent\nI0707 17:08:33.475377    3839 log.go:172] (0xc000b44840) (0xc0008ca000) Stream removed, broadcasting: 1\nI0707 17:08:33.475399    3839 log.go:172] (0xc000b44840) Go away received\nI0707 17:08:33.475924    3839 log.go:172] (0xc000b44840) (0xc0008ca000) Stream removed, broadcasting: 1\nI0707 17:08:33.475962    3839 log.go:172] (0xc000b44840) (0xc0006f5ae0) Stream removed, broadcasting: 3\nI0707 17:08:33.475976    3839 log.go:172] (0xc000b44840) (0xc0008ca0a0) Stream removed, broadcasting: 5\n"
Jul  7 17:08:33.480: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  7 17:08:33.480: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  7 17:08:33.485: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul  7 17:08:43.538: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  7 17:08:43.538: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 17:08:43.597: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  7 17:08:43.597: INFO: ss-0  jerma-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:23 +0000 UTC  }]
Jul  7 17:08:43.597: INFO: 
Jul  7 17:08:43.597: INFO: StatefulSet ss has not reached scale 3, at 1
Jul  7 17:08:44.601: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.952309306s
Jul  7 17:08:45.963: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.947845063s
Jul  7 17:08:46.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.585981186s
Jul  7 17:08:47.971: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.582410782s
Jul  7 17:08:48.992: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.577974482s
Jul  7 17:08:49.999: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.556544387s
Jul  7 17:08:51.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.550395425s
Jul  7 17:08:52.065: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.533115152s
Jul  7 17:08:53.162: INFO: Verifying statefulset ss doesn't scale past 3 for another 483.669856ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-443
Jul  7 17:08:54.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  7 17:08:54.360: INFO: stderr: "I0707 17:08:54.294725    3860 log.go:172] (0xc0000f2b00) (0xc0009ea0a0) Create stream\nI0707 17:08:54.294776    3860 log.go:172] (0xc0000f2b00) (0xc0009ea0a0) Stream added, broadcasting: 1\nI0707 17:08:54.296846    3860 log.go:172] (0xc0000f2b00) Reply frame received for 1\nI0707 17:08:54.296869    3860 log.go:172] (0xc0000f2b00) (0xc0005ee6e0) Create stream\nI0707 17:08:54.296877    3860 log.go:172] (0xc0000f2b00) (0xc0005ee6e0) Stream added, broadcasting: 3\nI0707 17:08:54.297922    3860 log.go:172] (0xc0000f2b00) Reply frame received for 3\nI0707 17:08:54.297948    3860 log.go:172] (0xc0000f2b00) (0xc0009ea140) Create stream\nI0707 17:08:54.297957    3860 log.go:172] (0xc0000f2b00) (0xc0009ea140) Stream added, broadcasting: 5\nI0707 17:08:54.298761    3860 log.go:172] (0xc0000f2b00) Reply frame received for 5\nI0707 17:08:54.351736    3860 log.go:172] (0xc0000f2b00) Data frame received for 5\nI0707 17:08:54.351781    3860 log.go:172] (0xc0009ea140) (5) Data frame handling\nI0707 17:08:54.351797    3860 log.go:172] (0xc0009ea140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0707 17:08:54.351831    3860 log.go:172] (0xc0000f2b00) Data frame received for 3\nI0707 17:08:54.351881    3860 log.go:172] (0xc0005ee6e0) (3) Data frame handling\nI0707 17:08:54.351900    3860 log.go:172] (0xc0005ee6e0) (3) Data frame sent\nI0707 17:08:54.351918    3860 log.go:172] (0xc0000f2b00) Data frame received for 3\nI0707 17:08:54.351929    3860 log.go:172] (0xc0005ee6e0) (3) Data frame handling\nI0707 17:08:54.351954    3860 log.go:172] (0xc0000f2b00) Data frame received for 5\nI0707 17:08:54.351968    3860 log.go:172] (0xc0009ea140) (5) Data frame handling\nI0707 17:08:54.353662    3860 log.go:172] (0xc0000f2b00) Data frame received for 1\nI0707 17:08:54.353739    3860 log.go:172] (0xc0009ea0a0) (1) Data frame handling\nI0707 17:08:54.353802    3860 log.go:172] (0xc0009ea0a0) (1) Data frame sent\nI0707 17:08:54.353824    3860 log.go:172] (0xc0000f2b00) (0xc0009ea0a0) Stream removed, broadcasting: 1\nI0707 17:08:54.353845    3860 log.go:172] (0xc0000f2b00) Go away received\nI0707 17:08:54.354352    3860 log.go:172] (0xc0000f2b00) (0xc0009ea0a0) Stream removed, broadcasting: 1\nI0707 17:08:54.354379    3860 log.go:172] (0xc0000f2b00) (0xc0005ee6e0) Stream removed, broadcasting: 3\nI0707 17:08:54.354397    3860 log.go:172] (0xc0000f2b00) (0xc0009ea140) Stream removed, broadcasting: 5\n"
Jul  7 17:08:54.360: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  7 17:08:54.360: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  7 17:08:54.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  7 17:08:54.652: INFO: stderr: "I0707 17:08:54.586089    3883 log.go:172] (0xc0005728f0) (0xc0009ba140) Create stream\nI0707 17:08:54.586152    3883 log.go:172] (0xc0005728f0) (0xc0009ba140) Stream added, broadcasting: 1\nI0707 17:08:54.589011    3883 log.go:172] (0xc0005728f0) Reply frame received for 1\nI0707 17:08:54.589084    3883 log.go:172] (0xc0005728f0) (0xc00062fa40) Create stream\nI0707 17:08:54.589097    3883 log.go:172] (0xc0005728f0) (0xc00062fa40) Stream added, broadcasting: 3\nI0707 17:08:54.590353    3883 log.go:172] (0xc0005728f0) Reply frame received for 3\nI0707 17:08:54.590389    3883 log.go:172] (0xc0005728f0) (0xc0002a1400) Create stream\nI0707 17:08:54.590397    3883 log.go:172] (0xc0005728f0) (0xc0002a1400) Stream added, broadcasting: 5\nI0707 17:08:54.591352    3883 log.go:172] (0xc0005728f0) Reply frame received for 5\nI0707 17:08:54.644504    3883 log.go:172] (0xc0005728f0) Data frame received for 3\nI0707 17:08:54.644556    3883 log.go:172] (0xc00062fa40) (3) Data frame handling\nI0707 17:08:54.644578    3883 log.go:172] (0xc00062fa40) (3) Data frame sent\nI0707 17:08:54.644596    3883 log.go:172] (0xc0005728f0) Data frame received for 3\nI0707 17:08:54.644604    3883 log.go:172] (0xc00062fa40) (3) Data frame handling\nI0707 17:08:54.644654    3883 log.go:172] (0xc0005728f0) Data frame received for 5\nI0707 17:08:54.644685    3883 log.go:172] (0xc0002a1400) (5) Data frame handling\nI0707 17:08:54.644711    3883 log.go:172] (0xc0002a1400) (5) Data frame sent\nI0707 17:08:54.644724    3883 log.go:172] (0xc0005728f0) Data frame received for 5\nI0707 17:08:54.644732    3883 log.go:172] (0xc0002a1400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0707 17:08:54.646565    3883 log.go:172] (0xc0005728f0) Data frame received for 1\nI0707 17:08:54.646641    3883 log.go:172] (0xc0009ba140) (1) Data frame handling\nI0707 17:08:54.646711    3883 log.go:172] (0xc0009ba140) (1) Data frame sent\nI0707 17:08:54.646741    3883 log.go:172] (0xc0005728f0) (0xc0009ba140) Stream removed, broadcasting: 1\nI0707 17:08:54.646770    3883 log.go:172] (0xc0005728f0) Go away received\nI0707 17:08:54.647275    3883 log.go:172] (0xc0005728f0) (0xc0009ba140) Stream removed, broadcasting: 1\nI0707 17:08:54.647347    3883 log.go:172] (0xc0005728f0) (0xc00062fa40) Stream removed, broadcasting: 3\nI0707 17:08:54.647366    3883 log.go:172] (0xc0005728f0) (0xc0002a1400) Stream removed, broadcasting: 5\n"
Jul  7 17:08:54.652: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  7 17:08:54.652: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul  7 17:08:54.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  7 17:08:54.882: INFO: stderr: "I0707 17:08:54.788378    3904 log.go:172] (0xc00078aa50) (0xc0007980a0) Create stream\nI0707 17:08:54.788446    3904 log.go:172] (0xc00078aa50) (0xc0007980a0) Stream added, broadcasting: 1\nI0707 17:08:54.791488    3904 log.go:172] (0xc00078aa50) Reply frame received for 1\nI0707 17:08:54.791534    3904 log.go:172] (0xc00078aa50) (0xc0006afae0) Create stream\nI0707 17:08:54.791551    3904 log.go:172] (0xc00078aa50) (0xc0006afae0) Stream added, broadcasting: 3\nI0707 17:08:54.792602    3904 log.go:172] (0xc00078aa50) Reply frame received for 3\nI0707 17:08:54.792654    3904 log.go:172] (0xc00078aa50) (0xc000294000) Create stream\nI0707 17:08:54.792667    3904 log.go:172] (0xc00078aa50) (0xc000294000) Stream added, broadcasting: 5\nI0707 17:08:54.794163    3904 log.go:172] (0xc00078aa50) Reply frame received for 5\nI0707 17:08:54.874804    3904 log.go:172] (0xc00078aa50) Data frame received for 3\nI0707 17:08:54.874840    3904 log.go:172] (0xc0006afae0) (3) Data frame handling\nI0707 17:08:54.874849    3904 log.go:172] (0xc0006afae0) (3) Data frame sent\nI0707 17:08:54.874870    3904 log.go:172] (0xc00078aa50) Data frame received for 5\nI0707 17:08:54.874901    3904 log.go:172] (0xc000294000) (5) Data frame handling\nI0707 17:08:54.874913    3904 log.go:172] (0xc000294000) (5) Data frame sent\nI0707 17:08:54.874921    3904 log.go:172] (0xc00078aa50) Data frame received for 5\nI0707 17:08:54.874928    3904 log.go:172] (0xc000294000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0707 17:08:54.874947    3904 log.go:172] (0xc00078aa50) Data frame received for 3\nI0707 17:08:54.874954    3904 log.go:172] (0xc0006afae0) (3) Data frame handling\nI0707 17:08:54.876471    3904 log.go:172] (0xc00078aa50) Data frame received for 1\nI0707 17:08:54.876499    3904 log.go:172] (0xc0007980a0) (1) Data frame handling\nI0707 17:08:54.876522    3904 log.go:172] (0xc0007980a0) (1) Data frame sent\nI0707 17:08:54.876544    3904 log.go:172] (0xc00078aa50) (0xc0007980a0) Stream removed, broadcasting: 1\nI0707 17:08:54.876566    3904 log.go:172] (0xc00078aa50) Go away received\nI0707 17:08:54.876891    3904 log.go:172] (0xc00078aa50) (0xc0007980a0) Stream removed, broadcasting: 1\nI0707 17:08:54.876909    3904 log.go:172] (0xc00078aa50) (0xc0006afae0) Stream removed, broadcasting: 3\nI0707 17:08:54.876918    3904 log.go:172] (0xc00078aa50) (0xc000294000) Stream removed, broadcasting: 5\n"
Jul  7 17:08:54.882: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul  7 17:08:54.882: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
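Note: the execs above move index.html back into httpd's docroot, /usr/local/apache2/htdocs/, which is what lets each pod report Ready again in the waits that follow; the readiness probe definition itself is not printed in this log. Outside the harness, readiness can be checked directly with a jsonpath query; the namespace and pod names below are the ones from this run:

    kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-443 \
      get pod ss-2 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

This prints True once the probe passes again.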

Jul  7 17:08:54.994: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jul  7 17:09:04.998: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 17:09:04.998: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 17:09:04.998: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jul  7 17:09:05.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  7 17:09:05.230: INFO: stderr: "I0707 17:09:05.138996    3925 log.go:172] (0xc00012e2c0) (0xc000743a40) Create stream\nI0707 17:09:05.139072    3925 log.go:172] (0xc00012e2c0) (0xc000743a40) Stream added, broadcasting: 1\nI0707 17:09:05.141814    3925 log.go:172] (0xc00012e2c0) Reply frame received for 1\nI0707 17:09:05.142019    3925 log.go:172] (0xc00012e2c0) (0xc000743c20) Create stream\nI0707 17:09:05.142038    3925 log.go:172] (0xc00012e2c0) (0xc000743c20) Stream added, broadcasting: 3\nI0707 17:09:05.142999    3925 log.go:172] (0xc00012e2c0) Reply frame received for 3\nI0707 17:09:05.143031    3925 log.go:172] (0xc00012e2c0) (0xc000976000) Create stream\nI0707 17:09:05.143042    3925 log.go:172] (0xc00012e2c0) (0xc000976000) Stream added, broadcasting: 5\nI0707 17:09:05.144041    3925 log.go:172] (0xc00012e2c0) Reply frame received for 5\nI0707 17:09:05.223163    3925 log.go:172] (0xc00012e2c0) Data frame received for 3\nI0707 17:09:05.223193    3925 log.go:172] (0xc000743c20) (3) Data frame handling\nI0707 17:09:05.223217    3925 log.go:172] (0xc000743c20) (3) Data frame sent\nI0707 17:09:05.223228    3925 log.go:172] (0xc00012e2c0) Data frame received for 3\nI0707 17:09:05.223238    3925 log.go:172] (0xc000743c20) (3) Data frame handling\nI0707 17:09:05.223271    3925 log.go:172] (0xc00012e2c0) Data frame received for 5\nI0707 17:09:05.223294    3925 log.go:172] (0xc000976000) (5) Data frame handling\nI0707 17:09:05.223322    3925 log.go:172] (0xc000976000) (5) Data frame sent\nI0707 17:09:05.223336    3925 log.go:172] (0xc00012e2c0) Data frame received for 5\nI0707 17:09:05.223346    3925 log.go:172] (0xc000976000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0707 17:09:05.224856    3925 log.go:172] (0xc00012e2c0) Data frame received for 1\nI0707 17:09:05.224905    3925 log.go:172] (0xc000743a40) (1) Data frame handling\nI0707 17:09:05.224930    3925 log.go:172] (0xc000743a40) (1) Data frame sent\nI0707 17:09:05.224952    3925 log.go:172] (0xc00012e2c0) (0xc000743a40) Stream removed, broadcasting: 1\nI0707 17:09:05.224986    3925 log.go:172] (0xc00012e2c0) Go away received\nI0707 17:09:05.225795    3925 log.go:172] (0xc00012e2c0) (0xc000743a40) Stream removed, broadcasting: 1\nI0707 17:09:05.225818    3925 log.go:172] (0xc00012e2c0) (0xc000743c20) Stream removed, broadcasting: 3\nI0707 17:09:05.225831    3925 log.go:172] (0xc00012e2c0) (0xc000976000) Stream removed, broadcasting: 5\n"
Jul  7 17:09:05.231: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  7 17:09:05.231: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  7 17:09:05.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  7 17:09:05.551: INFO: stderr: "I0707 17:09:05.351886    3946 log.go:172] (0xc00071aa50) (0xc0006e4000) Create stream\nI0707 17:09:05.351947    3946 log.go:172] (0xc00071aa50) (0xc0006e4000) Stream added, broadcasting: 1\nI0707 17:09:05.354516    3946 log.go:172] (0xc00071aa50) Reply frame received for 1\nI0707 17:09:05.354560    3946 log.go:172] (0xc00071aa50) (0xc0005c1b80) Create stream\nI0707 17:09:05.354577    3946 log.go:172] (0xc00071aa50) (0xc0005c1b80) Stream added, broadcasting: 3\nI0707 17:09:05.355340    3946 log.go:172] (0xc00071aa50) Reply frame received for 3\nI0707 17:09:05.355362    3946 log.go:172] (0xc00071aa50) (0xc0006e4140) Create stream\nI0707 17:09:05.355370    3946 log.go:172] (0xc00071aa50) (0xc0006e4140) Stream added, broadcasting: 5\nI0707 17:09:05.356053    3946 log.go:172] (0xc00071aa50) Reply frame received for 5\nI0707 17:09:05.412859    3946 log.go:172] (0xc00071aa50) Data frame received for 5\nI0707 17:09:05.412889    3946 log.go:172] (0xc0006e4140) (5) Data frame handling\nI0707 17:09:05.412910    3946 log.go:172] (0xc0006e4140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0707 17:09:05.542295    3946 log.go:172] (0xc00071aa50) Data frame received for 3\nI0707 17:09:05.542330    3946 log.go:172] (0xc0005c1b80) (3) Data frame handling\nI0707 17:09:05.542354    3946 log.go:172] (0xc0005c1b80) (3) Data frame sent\nI0707 17:09:05.542365    3946 log.go:172] (0xc00071aa50) Data frame received for 3\nI0707 17:09:05.542373    3946 log.go:172] (0xc0005c1b80) (3) Data frame handling\nI0707 17:09:05.542754    3946 log.go:172] (0xc00071aa50) Data frame received for 5\nI0707 17:09:05.542788    3946 log.go:172] (0xc0006e4140) (5) Data frame handling\nI0707 17:09:05.544587    3946 log.go:172] (0xc00071aa50) Data frame received for 1\nI0707 17:09:05.544623    3946 log.go:172] (0xc0006e4000) (1) Data frame handling\nI0707 17:09:05.544662    3946 log.go:172] (0xc0006e4000) (1) Data frame sent\nI0707 17:09:05.544687    3946 log.go:172] (0xc00071aa50) (0xc0006e4000) Stream removed, broadcasting: 1\nI0707 17:09:05.544708    3946 log.go:172] (0xc00071aa50) Go away received\nI0707 17:09:05.545365    3946 log.go:172] (0xc00071aa50) (0xc0006e4000) Stream removed, broadcasting: 1\nI0707 17:09:05.545388    3946 log.go:172] (0xc00071aa50) (0xc0005c1b80) Stream removed, broadcasting: 3\nI0707 17:09:05.545399    3946 log.go:172] (0xc00071aa50) (0xc0006e4140) Stream removed, broadcasting: 5\n"
Jul  7 17:09:05.551: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  7 17:09:05.551: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul  7 17:09:05.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul  7 17:09:05.857: INFO: stderr: "I0707 17:09:05.739122    3969 log.go:172] (0xc000982000) (0xc000780780) Create stream\nI0707 17:09:05.739197    3969 log.go:172] (0xc000982000) (0xc000780780) Stream added, broadcasting: 1\nI0707 17:09:05.743425    3969 log.go:172] (0xc000982000) Reply frame received for 1\nI0707 17:09:05.743474    3969 log.go:172] (0xc000982000) (0xc000a3c000) Create stream\nI0707 17:09:05.743485    3969 log.go:172] (0xc000982000) (0xc000a3c000) Stream added, broadcasting: 3\nI0707 17:09:05.744363    3969 log.go:172] (0xc000982000) Reply frame received for 3\nI0707 17:09:05.744413    3969 log.go:172] (0xc000982000) (0xc000914000) Create stream\nI0707 17:09:05.744444    3969 log.go:172] (0xc000982000) (0xc000914000) Stream added, broadcasting: 5\nI0707 17:09:05.745571    3969 log.go:172] (0xc000982000) Reply frame received for 5\nI0707 17:09:05.814933    3969 log.go:172] (0xc000982000) Data frame received for 5\nI0707 17:09:05.814977    3969 log.go:172] (0xc000914000) (5) Data frame handling\nI0707 17:09:05.815007    3969 log.go:172] (0xc000914000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0707 17:09:05.848160    3969 log.go:172] (0xc000982000) Data frame received for 3\nI0707 17:09:05.848265    3969 log.go:172] (0xc000a3c000) (3) Data frame handling\nI0707 17:09:05.848329    3969 log.go:172] (0xc000a3c000) (3) Data frame sent\nI0707 17:09:05.848600    3969 log.go:172] (0xc000982000) Data frame received for 3\nI0707 17:09:05.848627    3969 log.go:172] (0xc000a3c000) (3) Data frame handling\nI0707 17:09:05.848796    3969 log.go:172] (0xc000982000) Data frame received for 5\nI0707 17:09:05.848828    3969 log.go:172] (0xc000914000) (5) Data frame handling\nI0707 17:09:05.851078    3969 log.go:172] (0xc000982000) Data frame received for 1\nI0707 17:09:05.851106    3969 log.go:172] (0xc000780780) (1) Data frame handling\nI0707 17:09:05.851121    3969 log.go:172] (0xc000780780) (1) Data frame sent\nI0707 17:09:05.851634    3969 log.go:172] (0xc000982000) (0xc000780780) Stream removed, broadcasting: 1\nI0707 17:09:05.851700    3969 log.go:172] (0xc000982000) Go away received\nI0707 17:09:05.852092    3969 log.go:172] (0xc000982000) (0xc000780780) Stream removed, broadcasting: 1\nI0707 17:09:05.852118    3969 log.go:172] (0xc000982000) (0xc000a3c000) Stream removed, broadcasting: 3\nI0707 17:09:05.852131    3969 log.go:172] (0xc000982000) (0xc000914000) Stream removed, broadcasting: 5\n"
Jul  7 17:09:05.858: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul  7 17:09:05.858: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
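Note: these three execs do the reverse of the earlier restore. Moving index.html out of the docroot makes each pod's readiness probe start failing, which produces the Ready=false conditions in the polls below. The probe spec is not shown in this log, but it can be read back from the live object; a sketch using this run's names, assuming the single webserver container sits at index 0:

    kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-443 \
      get statefulset ss -o jsonpath='{.spec.template.spec.containers[0].readinessProbe}'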

Jul  7 17:09:05.858: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 17:09:05.875: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jul  7 17:09:15.998: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  7 17:09:15.998: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul  7 17:09:15.998: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul  7 17:09:16.041: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  7 17:09:16.041: INFO: ss-0  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:23 +0000 UTC  }]
Jul  7 17:09:16.041: INFO: ss-1  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  }]
Jul  7 17:09:16.041: INFO: ss-2  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  }]
Jul  7 17:09:16.041: INFO: 
Jul  7 17:09:16.041: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  7 17:09:17.244: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  7 17:09:17.244: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:23 +0000 UTC  }]
Jul  7 17:09:17.244: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  }]
Jul  7 17:09:17.245: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  }]
Jul  7 17:09:17.245: INFO: 
Jul  7 17:09:17.245: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  7 17:09:18.250: INFO: [poll unchanged from 17:09:17 (all three pods Running, GRACE 30s); StatefulSet ss has not reached scale 0, at 3]
Jul  7 17:09:19.318: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  7 17:09:19.318: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:23 +0000 UTC  }]
Jul  7 17:09:19.318: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  }]
Jul  7 17:09:19.318: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  }]
Jul  7 17:09:19.318: INFO: 
Jul  7 17:09:19.318: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  7 17:09:20.322: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  7 17:09:20.322: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:23 +0000 UTC  }]
Jul  7 17:09:20.322: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  }]
Jul  7 17:09:20.322: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:09:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 17:08:43 +0000 UTC  }]
Jul  7 17:09:20.322: INFO: 
Jul  7 17:09:20.322: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  7 17:09:21 - 17:09:25: INFO: [five further ~1s polls report the same state as 17:09:20 (ss-0 Pending, ss-1 Pending, ss-2 Running, GRACE 30s), each ending with "StatefulSet ss has not reached scale 0, at 3"]
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-443
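Note: the harness performs this scale-down through the client API; the equivalent imperative command, using this run's names, would be:

    kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-443 \
      scale statefulset ss --replicas=0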
Jul  7 17:09:27.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  7 17:09:27.958: INFO: rc: 1
Jul  7 17:09:27.958: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: pod does not exist

error:
exit status 1
Jul  7 17:09:37.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  7 17:09:38.056: INFO: rc: 1
Jul  7 17:09:38.056: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul  7 17:09:48 - 17:14:26: INFO: [the same RunHostCmd retry repeated every 10s, 28 attempts in all, each returning rc: 1 with stderr "Error from server (NotFound): pods "ss-0" not found"]
Jul  7 17:14:36.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-443 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul  7 17:14:36.693: INFO: rc: 1
Jul  7 17:14:36.693: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Jul  7 17:14:36.693: INFO: Scaling statefulset ss to 0
Jul  7 17:14:36.701: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul  7 17:14:36.703: INFO: Deleting all statefulset in ns statefulset-443
Jul  7 17:14:36.705: INFO: Scaling statefulset ss to 0
Jul  7 17:14:36.711: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 17:14:36.713: INFO: Deleting statefulset ss
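Note: the AfterEach cleanup above amounts to a scale-to-zero followed by a delete; in imperative terms, with this run's names:

    kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-443 \
      delete statefulset ss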
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul  7 17:14:36.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-443" for this suite.

• [SLOW TEST:374.775 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":277,"skipped":4552,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
Jul  7 17:14:37.552: INFO: Running AfterSuite actions on all nodes
Jul  7 17:14:37.552: INFO: Running AfterSuite actions on node 1
Jul  7 17:14:37.552: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Kubectl logs [It] should be able to retrieve and filter logs  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1398

Ran 278 of 4843 Specs in 6482.385 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4565 Skipped
--- FAIL: TestE2E (6482.48s)
FAIL