I0416 23:37:45.185174 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0416 23:37:45.185426 7 e2e.go:124] Starting e2e run "100d5a7b-98b4-4a9b-9804-b0c331afa0ed" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587080264 - Will randomize all specs
Will run 275 of 4992 specs

Apr 16 23:37:45.244: INFO: >>> kubeConfig: /root/.kube/config
Apr 16 23:37:45.249: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 16 23:37:45.271: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 16 23:37:45.301: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 16 23:37:45.301: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 16 23:37:45.301: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 16 23:37:45.317: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 16 23:37:45.317: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 16 23:37:45.317: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 16 23:37:45.318: INFO: kube-apiserver version: v1.17.0
Apr 16 23:37:45.318: INFO: >>> kubeConfig: /root/.kube/config
Apr 16 23:37:45.323: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:37:45.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
Apr 16 23:37:45.364: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:37:45.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7953" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":1,"skipped":6,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:37:45.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-6d5b4634-4706-40c1-87b8-a40a24b28102
STEP: Creating a pod to test consume configMaps
Apr 16 23:37:45.463: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ad6d2a1-2ff4-479b-987f-239c8de5cb88" in namespace "configmap-4669" to be "Succeeded or Failed"
Apr 16 23:37:45.478: INFO: Pod "pod-configmaps-6ad6d2a1-2ff4-479b-987f-239c8de5cb88": Phase="Pending", Reason="", readiness=false. Elapsed: 14.700693ms
Apr 16 23:37:47.481: INFO: Pod "pod-configmaps-6ad6d2a1-2ff4-479b-987f-239c8de5cb88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018124557s
Apr 16 23:37:49.486: INFO: Pod "pod-configmaps-6ad6d2a1-2ff4-479b-987f-239c8de5cb88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022775109s
STEP: Saw pod success
Apr 16 23:37:49.486: INFO: Pod "pod-configmaps-6ad6d2a1-2ff4-479b-987f-239c8de5cb88" satisfied condition "Succeeded or Failed"
Apr 16 23:37:49.491: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6ad6d2a1-2ff4-479b-987f-239c8de5cb88 container configmap-volume-test:
STEP: delete the pod
Apr 16 23:37:49.515: INFO: Waiting for pod pod-configmaps-6ad6d2a1-2ff4-479b-987f-239c8de5cb88 to disappear
Apr 16 23:37:49.520: INFO: Pod pod-configmaps-6ad6d2a1-2ff4-479b-987f-239c8de5cb88 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:37:49.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4669" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":37,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:37:49.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 16 23:37:49.600: INFO: Waiting up to 5m0s for pod "pod-41a7fab8-8815-4479-a83b-b8244ff563dc" in namespace "emptydir-9567" to be "Succeeded or Failed"
Apr 16 23:37:49.616: INFO: Pod "pod-41a7fab8-8815-4479-a83b-b8244ff563dc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.400518ms
Apr 16 23:37:51.627: INFO: Pod "pod-41a7fab8-8815-4479-a83b-b8244ff563dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027234573s
Apr 16 23:37:53.632: INFO: Pod "pod-41a7fab8-8815-4479-a83b-b8244ff563dc": Phase="Running", Reason="", readiness=true. Elapsed: 4.031710056s
Apr 16 23:37:55.636: INFO: Pod "pod-41a7fab8-8815-4479-a83b-b8244ff563dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035741796s
STEP: Saw pod success
Apr 16 23:37:55.636: INFO: Pod "pod-41a7fab8-8815-4479-a83b-b8244ff563dc" satisfied condition "Succeeded or Failed"
Apr 16 23:37:55.639: INFO: Trying to get logs from node latest-worker2 pod pod-41a7fab8-8815-4479-a83b-b8244ff563dc container test-container:
STEP: delete the pod
Apr 16 23:37:55.684: INFO: Waiting for pod pod-41a7fab8-8815-4479-a83b-b8244ff563dc to disappear
Apr 16 23:37:55.701: INFO: Pod pod-41a7fab8-8815-4479-a83b-b8244ff563dc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:37:55.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9567" for this suite.

• [SLOW TEST:6.214 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":90,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:37:55.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:37:55.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3175" for this suite.
STEP: Destroying namespace "nspatchtest-52b60c4b-1f62-41b1-91fa-7953f102f5d9-3646" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":4,"skipped":97,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:37:55.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 16 23:38:04.242: INFO: 10 pods remaining
Apr 16 23:38:04.242: INFO: 0 pods has nil DeletionTimestamp
Apr 16 23:38:04.242: INFO:
Apr 16 23:38:05.084: INFO: 0 pods remaining
Apr 16 23:38:05.084: INFO: 0 pods has nil DeletionTimestamp
Apr 16 23:38:05.084: INFO:
STEP: Gathering metrics
W0416 23:38:05.720273 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 16 23:38:05.720: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:38:05.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2517" for this suite.
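Editor's note: alongside the human-readable lines above, the runner emits one machine-readable JSON progress record per spec (the `{"msg":"PASSED ...","total":275,"completed":...,"skipped":...,"failed":...}` lines). Tallying results from a captured log needs only a few lines of Python. This is an illustrative sketch (the regex and function name are the editor's, not part of the e2e framework), assuming each record stays on a single line as the suite prints it:

```python
import json
import re

# A progress record looks like:
#   {"msg":"PASSED ...","total":275,"completed":1,"skipped":6,"failed":0}
PROGRESS_RE = re.compile(r'\{"msg":.*?"failed":\d+\}')

def tally(log_text):
    """Return the most recent progress record found in an e2e log, or None."""
    last = None
    for match in PROGRESS_RE.finditer(log_text):
        last = json.loads(match.group(0))
    return last

sample = (
    'noise {"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0} '
    'noise {"msg":"PASSED example spec","total":275,"completed":1,"skipped":6,"failed":0}'
)
print(tally(sample))  # the latest record wins, giving the running totals
```

Because the records are cumulative, the last one in the log is the suite's running score, which is how dashboards typically summarize an interrupted run.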
• [SLOW TEST:9.977 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":5,"skipped":114,"failed":0}
SSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:38:05.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 16 23:38:10.848: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7fb556c8-414f-49d4-95a9-7220df9cb986"
Apr 16 23:38:10.848: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7fb556c8-414f-49d4-95a9-7220df9cb986" in namespace "pods-292" to be "terminated due to deadline exceeded"
Apr 16 23:38:10.851: INFO: Pod "pod-update-activedeadlineseconds-7fb556c8-414f-49d4-95a9-7220df9cb986": Phase="Running", Reason="", readiness=true. Elapsed: 3.175235ms
Apr 16 23:38:12.855: INFO: Pod "pod-update-activedeadlineseconds-7fb556c8-414f-49d4-95a9-7220df9cb986": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007306077s
Apr 16 23:38:12.855: INFO: Pod "pod-update-activedeadlineseconds-7fb556c8-414f-49d4-95a9-7220df9cb986" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:38:12.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-292" for this suite.

• [SLOW TEST:6.952 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":120,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:38:12.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 16 23:38:12.984: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aeed8b21-6e65-42e2-b604-8fa928950216" in namespace "projected-7486" to be "Succeeded or Failed"
Apr 16 23:38:12.989: INFO: Pod "downwardapi-volume-aeed8b21-6e65-42e2-b604-8fa928950216": Phase="Pending", Reason="", readiness=false. Elapsed: 5.681143ms
Apr 16 23:38:14.994: INFO: Pod "downwardapi-volume-aeed8b21-6e65-42e2-b604-8fa928950216": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009810932s
Apr 16 23:38:16.998: INFO: Pod "downwardapi-volume-aeed8b21-6e65-42e2-b604-8fa928950216": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014187996s
STEP: Saw pod success
Apr 16 23:38:16.998: INFO: Pod "downwardapi-volume-aeed8b21-6e65-42e2-b604-8fa928950216" satisfied condition "Succeeded or Failed"
Apr 16 23:38:17.001: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-aeed8b21-6e65-42e2-b604-8fa928950216 container client-container:
STEP: delete the pod
Apr 16 23:38:17.049: INFO: Waiting for pod downwardapi-volume-aeed8b21-6e65-42e2-b604-8fa928950216 to disappear
Apr 16 23:38:17.088: INFO: Pod downwardapi-volume-aeed8b21-6e65-42e2-b604-8fa928950216 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:38:17.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7486" for this suite.
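Editor's note: the repeated `Waiting up to 5m0s ... Elapsed: ...` lines throughout this log come from the framework's poll-until-condition loop (Go code in `test/e2e/framework`). The pattern, reduced to a language-neutral Python sketch with a simulated pod (the names `poll_until` and the fake `phases` sequence are illustrative, not the framework's actual API):

```python
import time

def poll_until(condition, timeout_s=300.0, interval_s=2.0,
               clock=time.monotonic, sleep=time.sleep):
    """Poll condition() every interval_s until it returns truthy.

    Mirrors the e2e framework's wait loop: the caller states a timeout
    (e.g. 5m0s) and a target condition (e.g. phase "Succeeded or Failed");
    each failed attempt waits one interval, and exceeding the timeout is
    reported with the elapsed time, like the log's Elapsed fields.
    """
    start = clock()
    while True:
        if condition():
            return clock() - start  # elapsed time on success
        elapsed = clock() - start
        if elapsed + interval_s > timeout_s:
            raise TimeoutError(f"condition not met after {elapsed:.3f}s")
        sleep(interval_s)

# Simulated pod that reaches Succeeded on the third poll, as in the log above
# (Pending, Pending, then Succeeded). A no-op sleep keeps the demo instant.
phases = iter(["Pending", "Pending", "Succeeded"])
elapsed = poll_until(lambda: next(phases) == "Succeeded",
                     timeout_s=10, interval_s=0, sleep=lambda s: None)
```

The real framework additionally re-fetches the Pod object on each attempt and logs the phase; injecting `clock` and `sleep` as parameters is what makes the loop testable without a cluster.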
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":121,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:38:17.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 16 23:38:21.689: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6324 pod-service-account-81cabeff-1c1f-4e99-8ceb-73db21051839 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 16 23:38:24.070: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6324 pod-service-account-81cabeff-1c1f-4e99-8ceb-73db21051839 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 16 23:38:24.274: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6324 pod-service-account-81cabeff-1c1f-4e99-8ceb-73db21051839 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:38:24.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6324" for this suite.

• [SLOW TEST:7.413 seconds]
[sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":8,"skipped":161,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:38:24.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9343
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-9343
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9343
Apr 16 23:38:24.597: INFO: Found 0 stateful pods, waiting for 1
Apr 16 23:38:34.601: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Apr 16 23:38:34.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9343 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 16 23:38:34.905: INFO: stderr: "I0416 23:38:34.747280 105 log.go:172] (0xc00068cb00) (0xc0006883c0) Create stream\nI0416 23:38:34.747332 105 log.go:172] (0xc00068cb00) (0xc0006883c0) Stream added, broadcasting: 1\nI0416 23:38:34.750199 105 log.go:172] (0xc00068cb00) Reply frame received for 1\nI0416 23:38:34.750258 105 log.go:172] (0xc00068cb00) (0xc0006e12c0) Create stream\nI0416 23:38:34.750278 105 log.go:172] (0xc00068cb00) (0xc0006e12c0) Stream added, broadcasting: 3\nI0416 23:38:34.751354 105 log.go:172] (0xc00068cb00) Reply frame received for 3\nI0416 23:38:34.751392 105 log.go:172] (0xc00068cb00) (0xc000688460) Create stream\nI0416 23:38:34.751403 105 log.go:172] (0xc00068cb00) (0xc000688460) Stream added, broadcasting: 5\nI0416 23:38:34.752533 105 log.go:172] (0xc00068cb00) Reply frame received for 5\nI0416 23:38:34.869686 105 log.go:172] (0xc00068cb00) Data frame received for 5\nI0416 23:38:34.869709 105 log.go:172] (0xc000688460) (5) Data frame handling\nI0416 23:38:34.869721 105 log.go:172] (0xc000688460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0416 23:38:34.898227 105 log.go:172] (0xc00068cb00) Data frame received for 5\nI0416 23:38:34.898270 105 log.go:172] (0xc00068cb00) Data frame received for 3\nI0416 23:38:34.898317 105 log.go:172] (0xc0006e12c0) (3) Data frame handling\nI0416 23:38:34.898357 105 log.go:172] (0xc0006e12c0) (3) Data frame sent\nI0416 23:38:34.898389 105 log.go:172] (0xc000688460) (5) Data frame handling\nI0416 23:38:34.898446 105 log.go:172] (0xc00068cb00) Data frame received for 3\nI0416 23:38:34.898481 105 log.go:172] (0xc0006e12c0) (3) Data frame handling\nI0416 23:38:34.901329 105 log.go:172] (0xc00068cb00) Data frame received for 1\nI0416 23:38:34.901398 105 log.go:172] (0xc0006883c0) (1) Data frame handling\nI0416 23:38:34.901419 105 log.go:172] (0xc0006883c0) (1) Data frame sent\nI0416 23:38:34.901443 105 log.go:172] (0xc00068cb00) (0xc0006883c0) Stream removed, broadcasting: 1\nI0416 23:38:34.901463 105 log.go:172] (0xc00068cb00) Go away received\nI0416 23:38:34.901791 105 log.go:172] (0xc00068cb00) (0xc0006883c0) Stream removed, broadcasting: 1\nI0416 23:38:34.901815 105 log.go:172] (0xc00068cb00) (0xc0006e12c0) Stream removed, broadcasting: 3\nI0416 23:38:34.901831 105 log.go:172] (0xc00068cb00) (0xc000688460) Stream removed, broadcasting: 5\n"
Apr 16 23:38:34.906: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 16 23:38:34.906: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 16 23:38:34.909: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 16 23:38:44.914: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 16 23:38:44.914: INFO: Waiting for statefulset status.replicas updated to 0
Apr 16 23:38:44.942: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 16 23:38:44.942: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC }]
Apr 16 23:38:44.942: INFO:
Apr 16 23:38:44.942: INFO: StatefulSet ss has not reached scale 3, at 1
Apr 16 23:38:45.947: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982574742s
Apr 16 23:38:47.048: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977960938s
Apr 16 23:38:48.052: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.876482904s
Apr 16 23:38:49.695: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.872409537s
Apr 16 23:38:50.700: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.229510562s
Apr 16 23:38:51.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.22432842s
Apr 16 23:38:52.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.218972734s
Apr 16 23:38:53.715: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.213992061s
Apr 16 23:38:54.720: INFO: Verifying statefulset ss doesn't scale past 3 for another 209.40074ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9343
Apr 16 23:38:55.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9343 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 16 23:38:55.945: INFO: stderr: "I0416 23:38:55.869014 129 log.go:172] (0xc000a280b0) (0xc000548c80) Create stream\nI0416 23:38:55.869086 129 log.go:172] (0xc000a280b0) (0xc000548c80) Stream added, broadcasting: 1\nI0416 23:38:55.871537 129 log.go:172] (0xc000a280b0) Reply frame received for 1\nI0416 23:38:55.871574 129 log.go:172] (0xc000a280b0) (0xc000a42000) Create stream\nI0416 23:38:55.871589 129 log.go:172] (0xc000a280b0) (0xc000a42000) Stream added, broadcasting: 3\nI0416 23:38:55.872298 129 log.go:172] (0xc000a280b0) Reply frame received for 3\nI0416 23:38:55.872336 129 log.go:172] (0xc000a280b0) (0xc0007bd400) Create stream\nI0416 23:38:55.872358 129 log.go:172] (0xc000a280b0) (0xc0007bd400) Stream added, broadcasting: 5\nI0416 23:38:55.873068 129 log.go:172] (0xc000a280b0) Reply frame received for 5\nI0416 23:38:55.937661 129 log.go:172] (0xc000a280b0) Data frame received for 5\nI0416 23:38:55.937679 129 log.go:172] (0xc0007bd400) (5) Data frame handling\nI0416 23:38:55.937686 129 log.go:172] (0xc0007bd400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0416 23:38:55.937697 129 log.go:172] (0xc000a280b0) Data frame received for 3\nI0416 23:38:55.937702 129 log.go:172] (0xc000a42000) (3) Data frame handling\nI0416 23:38:55.937714 129 log.go:172] (0xc000a42000) (3) Data frame sent\nI0416 23:38:55.937842 129 log.go:172] (0xc000a280b0) Data frame received for 5\nI0416 23:38:55.937876 129 log.go:172] (0xc0007bd400) (5) Data frame handling\nI0416 23:38:55.938046 129 log.go:172] (0xc000a280b0) Data frame received for 3\nI0416 23:38:55.938057 129 log.go:172] (0xc000a42000) (3) Data frame handling\nI0416 23:38:55.939913 129 log.go:172] (0xc000a280b0) Data frame received for 1\nI0416 23:38:55.939929 129 log.go:172] (0xc000548c80) (1) Data frame handling\nI0416 23:38:55.939935 129 log.go:172] (0xc000548c80) (1) Data frame sent\nI0416 23:38:55.940150 129 log.go:172] (0xc000a280b0) (0xc000548c80) Stream removed, broadcasting: 1\nI0416 23:38:55.940207 129 log.go:172] (0xc000a280b0) Go away received\nI0416 23:38:55.940438 129 log.go:172] (0xc000a280b0) (0xc000548c80) Stream removed, broadcasting: 1\nI0416 23:38:55.940451 129 log.go:172] (0xc000a280b0) (0xc000a42000) Stream removed, broadcasting: 3\nI0416 23:38:55.940457 129 log.go:172] (0xc000a280b0) (0xc0007bd400) Stream removed, broadcasting: 5\n"
Apr 16 23:38:55.945: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 16 23:38:55.945: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 16 23:38:55.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9343 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 16 23:38:56.172: INFO: stderr: "I0416 23:38:56.080537 150 log.go:172] (0xc0009a84d0) (0xc00067b4a0) Create stream\nI0416 23:38:56.080604 150 log.go:172] (0xc0009a84d0) (0xc00067b4a0) Stream added, broadcasting: 1\nI0416 23:38:56.091585 150 log.go:172] (0xc0009a84d0) Reply frame received for 1\nI0416 23:38:56.091633 150 log.go:172] (0xc0009a84d0) (0xc0006a8000) Create stream\nI0416 23:38:56.091644 150 log.go:172] (0xc0009a84d0) (0xc0006a8000) Stream added, broadcasting: 3\nI0416 23:38:56.093936 150 log.go:172] (0xc0009a84d0) Reply frame received for 3\nI0416 23:38:56.093975 150 log.go:172] (0xc0009a84d0) (0xc0006a8140) Create stream\nI0416 23:38:56.093983 150 log.go:172] (0xc0009a84d0) (0xc0006a8140) Stream added, broadcasting: 5\nI0416 23:38:56.094942 150 log.go:172] (0xc0009a84d0) Reply frame received for 5\nI0416 23:38:56.165099 150 log.go:172] (0xc0009a84d0) Data frame received for 3\nI0416 23:38:56.165237 150 log.go:172] (0xc0006a8000) (3) Data frame handling\nI0416 23:38:56.165257 150 log.go:172] (0xc0006a8000) (3) Data frame sent\nI0416 23:38:56.165268 150 log.go:172] (0xc0009a84d0) Data frame received for 3\nI0416 23:38:56.165277 150 log.go:172] (0xc0006a8000) (3) Data frame handling\nI0416 23:38:56.165299 150 log.go:172] (0xc0009a84d0) Data frame received for 5\nI0416 23:38:56.165323 150 log.go:172] (0xc0006a8140) (5) Data frame handling\nI0416 23:38:56.165347 150 log.go:172] (0xc0006a8140) (5) Data frame sent\nI0416 23:38:56.165360 150 log.go:172] (0xc0009a84d0) Data frame received for 5\nI0416 23:38:56.165370 150 log.go:172] (0xc0006a8140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0416 23:38:56.166748 150 log.go:172] (0xc0009a84d0) Data frame received for 1\nI0416 23:38:56.166868 150 log.go:172] (0xc00067b4a0) (1) Data frame handling\nI0416 23:38:56.166908 150 log.go:172] (0xc00067b4a0) (1) Data frame sent\nI0416 23:38:56.166930 150 log.go:172] (0xc0009a84d0) (0xc00067b4a0) Stream removed, broadcasting: 1\nI0416 23:38:56.166955 150 log.go:172] (0xc0009a84d0) Go away received\nI0416 23:38:56.167249 150 log.go:172] (0xc0009a84d0) (0xc00067b4a0) Stream removed, broadcasting: 1\nI0416 23:38:56.167280 150 log.go:172] (0xc0009a84d0) (0xc0006a8000) Stream removed, broadcasting: 3\nI0416 23:38:56.167288 150 log.go:172] (0xc0009a84d0) (0xc0006a8140) Stream removed, broadcasting: 5\n"
Apr 16 23:38:56.172: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 16 23:38:56.172: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 16 23:38:56.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9343 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 16 23:38:56.413: INFO: stderr: "I0416 23:38:56.331701 172 log.go:172] (0xc000b08dc0) (0xc000bf2460) Create stream\nI0416 23:38:56.331777 172 log.go:172] (0xc000b08dc0) (0xc000bf2460) Stream added, broadcasting: 1\nI0416 23:38:56.334949 172 log.go:172] (0xc000b08dc0) Reply frame received for 1\nI0416 23:38:56.334991 172 log.go:172] (0xc000b08dc0) (0xc000a1a280) Create stream\nI0416 23:38:56.335004 172 log.go:172] (0xc000b08dc0) (0xc000a1a280) Stream added, broadcasting: 3\nI0416 23:38:56.335898 172 log.go:172] (0xc000b08dc0) Reply frame received for 3\nI0416 23:38:56.335921 172 log.go:172] (0xc000b08dc0) (0xc000a1a320) Create stream\nI0416 23:38:56.335935 172 log.go:172] (0xc000b08dc0) (0xc000a1a320) Stream added, broadcasting: 5\nI0416 23:38:56.336901 172 log.go:172] (0xc000b08dc0) Reply frame received for 5\nI0416 23:38:56.408691 172 log.go:172] (0xc000b08dc0) Data frame received for 5\nI0416 23:38:56.408719 172 log.go:172] (0xc000b08dc0) Data frame received for 3\nI0416 23:38:56.408734 172 log.go:172] (0xc000a1a280) (3) Data frame handling\nI0416 23:38:56.408742 172 log.go:172] (0xc000a1a280) (3) Data frame sent\nI0416 23:38:56.408747 172 log.go:172] (0xc000b08dc0) Data frame received for 3\nI0416 23:38:56.408751 172 log.go:172] (0xc000a1a280) (3) Data frame handling\nI0416 23:38:56.408766 172 log.go:172] (0xc000a1a320) (5) Data frame handling\nI0416 23:38:56.408772 172 log.go:172] (0xc000a1a320) (5) Data frame sent\nI0416 23:38:56.408776 172 log.go:172] (0xc000b08dc0) Data frame received for 5\nI0416 23:38:56.408784 172 log.go:172] (0xc000a1a320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0416 23:38:56.409669 172 log.go:172] (0xc000b08dc0) Data frame received for 1\nI0416 23:38:56.409688 172 log.go:172] (0xc000bf2460) (1) Data frame handling\nI0416 23:38:56.409700 172 log.go:172] (0xc000bf2460) (1) Data frame sent\nI0416 23:38:56.409707 172 log.go:172] (0xc000b08dc0) (0xc000bf2460) Stream removed, broadcasting: 1\nI0416 23:38:56.409714 172 log.go:172] (0xc000b08dc0) Go away received\nI0416 23:38:56.409958 172 log.go:172] (0xc000b08dc0) (0xc000bf2460) Stream removed, broadcasting: 1\nI0416 23:38:56.409969 172 log.go:172] (0xc000b08dc0) (0xc000a1a280) Stream removed, broadcasting: 3\nI0416 23:38:56.409974 172 log.go:172] (0xc000b08dc0) (0xc000a1a320) Stream removed, broadcasting: 5\n"
Apr 16 23:38:56.413: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 16 23:38:56.413: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 16 23:38:56.416: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 16 23:38:56.416: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 16
23:38:56.416: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 16 23:38:56.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9343 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 23:38:56.609: INFO: stderr: "I0416 23:38:56.547128 191 log.go:172] (0xc00003adc0) (0xc00065d540) Create stream\nI0416 23:38:56.547191 191 log.go:172] (0xc00003adc0) (0xc00065d540) Stream added, broadcasting: 1\nI0416 23:38:56.550344 191 log.go:172] (0xc00003adc0) Reply frame received for 1\nI0416 23:38:56.550376 191 log.go:172] (0xc00003adc0) (0xc000328000) Create stream\nI0416 23:38:56.550385 191 log.go:172] (0xc00003adc0) (0xc000328000) Stream added, broadcasting: 3\nI0416 23:38:56.551400 191 log.go:172] (0xc00003adc0) Reply frame received for 3\nI0416 23:38:56.551469 191 log.go:172] (0xc00003adc0) (0xc0002a0a00) Create stream\nI0416 23:38:56.551502 191 log.go:172] (0xc00003adc0) (0xc0002a0a00) Stream added, broadcasting: 5\nI0416 23:38:56.552542 191 log.go:172] (0xc00003adc0) Reply frame received for 5\nI0416 23:38:56.601923 191 log.go:172] (0xc00003adc0) Data frame received for 5\nI0416 23:38:56.601957 191 log.go:172] (0xc0002a0a00) (5) Data frame handling\nI0416 23:38:56.601969 191 log.go:172] (0xc0002a0a00) (5) Data frame sent\nI0416 23:38:56.601976 191 log.go:172] (0xc00003adc0) Data frame received for 5\nI0416 23:38:56.601983 191 log.go:172] (0xc0002a0a00) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0416 23:38:56.602005 191 log.go:172] (0xc00003adc0) Data frame received for 3\nI0416 23:38:56.602014 191 log.go:172] (0xc000328000) (3) Data frame handling\nI0416 23:38:56.602025 191 log.go:172] (0xc000328000) (3) Data frame sent\nI0416 23:38:56.602032 191 log.go:172] (0xc00003adc0) Data frame received for 3\nI0416 
23:38:56.602039 191 log.go:172] (0xc000328000) (3) Data frame handling\nI0416 23:38:56.603967 191 log.go:172] (0xc00003adc0) Data frame received for 1\nI0416 23:38:56.604010 191 log.go:172] (0xc00065d540) (1) Data frame handling\nI0416 23:38:56.604045 191 log.go:172] (0xc00065d540) (1) Data frame sent\nI0416 23:38:56.604081 191 log.go:172] (0xc00003adc0) (0xc00065d540) Stream removed, broadcasting: 1\nI0416 23:38:56.604115 191 log.go:172] (0xc00003adc0) Go away received\nI0416 23:38:56.604436 191 log.go:172] (0xc00003adc0) (0xc00065d540) Stream removed, broadcasting: 1\nI0416 23:38:56.604456 191 log.go:172] (0xc00003adc0) (0xc000328000) Stream removed, broadcasting: 3\nI0416 23:38:56.604467 191 log.go:172] (0xc00003adc0) (0xc0002a0a00) Stream removed, broadcasting: 5\n" Apr 16 23:38:56.609: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 23:38:56.609: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 23:38:56.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9343 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 23:38:56.850: INFO: stderr: "I0416 23:38:56.744799 213 log.go:172] (0xc000adca50) (0xc000a76500) Create stream\nI0416 23:38:56.744857 213 log.go:172] (0xc000adca50) (0xc000a76500) Stream added, broadcasting: 1\nI0416 23:38:56.751058 213 log.go:172] (0xc000adca50) Reply frame received for 1\nI0416 23:38:56.751107 213 log.go:172] (0xc000adca50) (0xc0006855e0) Create stream\nI0416 23:38:56.751121 213 log.go:172] (0xc000adca50) (0xc0006855e0) Stream added, broadcasting: 3\nI0416 23:38:56.752119 213 log.go:172] (0xc000adca50) Reply frame received for 3\nI0416 23:38:56.752186 213 log.go:172] (0xc000adca50) (0xc000556a00) Create stream\nI0416 23:38:56.752214 213 log.go:172] (0xc000adca50) (0xc000556a00) 
Stream added, broadcasting: 5\nI0416 23:38:56.753536 213 log.go:172] (0xc000adca50) Reply frame received for 5\nI0416 23:38:56.818908 213 log.go:172] (0xc000adca50) Data frame received for 5\nI0416 23:38:56.818939 213 log.go:172] (0xc000556a00) (5) Data frame handling\nI0416 23:38:56.818962 213 log.go:172] (0xc000556a00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0416 23:38:56.842975 213 log.go:172] (0xc000adca50) Data frame received for 3\nI0416 23:38:56.843028 213 log.go:172] (0xc0006855e0) (3) Data frame handling\nI0416 23:38:56.843078 213 log.go:172] (0xc0006855e0) (3) Data frame sent\nI0416 23:38:56.843117 213 log.go:172] (0xc000adca50) Data frame received for 3\nI0416 23:38:56.843132 213 log.go:172] (0xc0006855e0) (3) Data frame handling\nI0416 23:38:56.843258 213 log.go:172] (0xc000adca50) Data frame received for 5\nI0416 23:38:56.843291 213 log.go:172] (0xc000556a00) (5) Data frame handling\nI0416 23:38:56.844710 213 log.go:172] (0xc000adca50) Data frame received for 1\nI0416 23:38:56.844734 213 log.go:172] (0xc000a76500) (1) Data frame handling\nI0416 23:38:56.844766 213 log.go:172] (0xc000a76500) (1) Data frame sent\nI0416 23:38:56.844786 213 log.go:172] (0xc000adca50) (0xc000a76500) Stream removed, broadcasting: 1\nI0416 23:38:56.844804 213 log.go:172] (0xc000adca50) Go away received\nI0416 23:38:56.845525 213 log.go:172] (0xc000adca50) (0xc000a76500) Stream removed, broadcasting: 1\nI0416 23:38:56.845552 213 log.go:172] (0xc000adca50) (0xc0006855e0) Stream removed, broadcasting: 3\nI0416 23:38:56.845570 213 log.go:172] (0xc000adca50) (0xc000556a00) Stream removed, broadcasting: 5\n" Apr 16 23:38:56.850: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 23:38:56.850: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 23:38:56.850: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9343 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 23:38:57.108: INFO: stderr: "I0416 23:38:56.986475 235 log.go:172] (0xc0006986e0) (0xc0007e7180) Create stream\nI0416 23:38:56.986548 235 log.go:172] (0xc0006986e0) (0xc0007e7180) Stream added, broadcasting: 1\nI0416 23:38:56.990131 235 log.go:172] (0xc0006986e0) Reply frame received for 1\nI0416 23:38:56.990183 235 log.go:172] (0xc0006986e0) (0xc0009d6000) Create stream\nI0416 23:38:56.990200 235 log.go:172] (0xc0006986e0) (0xc0009d6000) Stream added, broadcasting: 3\nI0416 23:38:56.991196 235 log.go:172] (0xc0006986e0) Reply frame received for 3\nI0416 23:38:56.991235 235 log.go:172] (0xc0006986e0) (0xc000404000) Create stream\nI0416 23:38:56.991247 235 log.go:172] (0xc0006986e0) (0xc000404000) Stream added, broadcasting: 5\nI0416 23:38:56.992179 235 log.go:172] (0xc0006986e0) Reply frame received for 5\nI0416 23:38:57.061013 235 log.go:172] (0xc0006986e0) Data frame received for 5\nI0416 23:38:57.061057 235 log.go:172] (0xc000404000) (5) Data frame handling\nI0416 23:38:57.061091 235 log.go:172] (0xc000404000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0416 23:38:57.100998 235 log.go:172] (0xc0006986e0) Data frame received for 5\nI0416 23:38:57.101043 235 log.go:172] (0xc000404000) (5) Data frame handling\nI0416 23:38:57.101229 235 log.go:172] (0xc0006986e0) Data frame received for 3\nI0416 23:38:57.101286 235 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0416 23:38:57.101318 235 log.go:172] (0xc0009d6000) (3) Data frame sent\nI0416 23:38:57.101341 235 log.go:172] (0xc0006986e0) Data frame received for 3\nI0416 23:38:57.101350 235 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0416 23:38:57.103141 235 log.go:172] (0xc0006986e0) Data frame received for 1\nI0416 23:38:57.103164 235 log.go:172] (0xc0007e7180) (1) Data frame 
handling\nI0416 23:38:57.103177 235 log.go:172] (0xc0007e7180) (1) Data frame sent\nI0416 23:38:57.103190 235 log.go:172] (0xc0006986e0) (0xc0007e7180) Stream removed, broadcasting: 1\nI0416 23:38:57.103329 235 log.go:172] (0xc0006986e0) Go away received\nI0416 23:38:57.103651 235 log.go:172] (0xc0006986e0) (0xc0007e7180) Stream removed, broadcasting: 1\nI0416 23:38:57.103687 235 log.go:172] (0xc0006986e0) (0xc0009d6000) Stream removed, broadcasting: 3\nI0416 23:38:57.103707 235 log.go:172] (0xc0006986e0) (0xc000404000) Stream removed, broadcasting: 5\n" Apr 16 23:38:57.108: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 23:38:57.108: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 23:38:57.108: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 23:38:57.126: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 16 23:39:07.134: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 16 23:39:07.134: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 16 23:39:07.134: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 16 23:39:07.167: INFO: POD NODE PHASE GRACE CONDITIONS Apr 16 23:39:07.167: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC }] Apr 16 23:39:07.168: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2020-04-16 23:38:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:07.168: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:07.168: INFO: Apr 16 23:39:07.168: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 16 23:39:08.174: INFO: POD NODE PHASE GRACE CONDITIONS Apr 16 23:39:08.174: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC }] Apr 16 23:39:08.174: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:08.174: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:08.174: INFO: Apr 16 23:39:08.174: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 16 23:39:09.178: INFO: POD NODE PHASE GRACE CONDITIONS Apr 16 23:39:09.178: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC }] Apr 16 23:39:09.178: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:09.178: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:09.178: INFO: Apr 16 23:39:09.178: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 16 23:39:10.183: INFO: POD NODE PHASE GRACE CONDITIONS Apr 16 23:39:10.183: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC }] Apr 16 23:39:10.183: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:10.183: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:10.183: INFO: Apr 16 23:39:10.183: INFO: 
StatefulSet ss has not reached scale 0, at 3 Apr 16 23:39:11.189: INFO: POD NODE PHASE GRACE CONDITIONS Apr 16 23:39:11.189: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC }] Apr 16 23:39:11.189: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:11.189: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:11.189: INFO: Apr 16 23:39:11.189: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 16 23:39:12.198: INFO: POD NODE PHASE GRACE CONDITIONS Apr 16 23:39:12.198: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:24 +0000 UTC }] Apr 16 23:39:12.198: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:12.198: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 23:38:44 +0000 UTC }] Apr 16 23:39:12.198: INFO: Apr 16 23:39:12.198: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 16 23:39:13.202: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.943558189s Apr 16 23:39:14.206: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.939624596s Apr 16 23:39:15.210: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.935555514s Apr 16 23:39:16.214: INFO: Verifying statefulset ss doesn't scale past 0 for another 931.412874ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9343 Apr 16 23:39:17.218: INFO: Scaling statefulset ss to 0 Apr 16 23:39:17.228: INFO: 
Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 16 23:39:17.231: INFO: Deleting all statefulset in ns statefulset-9343 Apr 16 23:39:17.234: INFO: Scaling statefulset ss to 0 Apr 16 23:39:17.241: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 23:39:17.243: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:39:17.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9343" for this suite. • [SLOW TEST:52.760 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":9,"skipped":163,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:39:17.270: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-816edcd6-88ee-485a-83f1-4a142e2259dd STEP: Creating a pod to test consume configMaps Apr 16 23:39:17.361: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a4680ea1-9b43-4069-b899-08daa25ad74e" in namespace "projected-9312" to be "Succeeded or Failed" Apr 16 23:39:17.367: INFO: Pod "pod-projected-configmaps-a4680ea1-9b43-4069-b899-08daa25ad74e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.819416ms Apr 16 23:39:19.371: INFO: Pod "pod-projected-configmaps-a4680ea1-9b43-4069-b899-08daa25ad74e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009879816s Apr 16 23:39:21.375: INFO: Pod "pod-projected-configmaps-a4680ea1-9b43-4069-b899-08daa25ad74e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013871458s STEP: Saw pod success Apr 16 23:39:21.375: INFO: Pod "pod-projected-configmaps-a4680ea1-9b43-4069-b899-08daa25ad74e" satisfied condition "Succeeded or Failed" Apr 16 23:39:21.378: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-a4680ea1-9b43-4069-b899-08daa25ad74e container projected-configmap-volume-test: STEP: delete the pod Apr 16 23:39:21.400: INFO: Waiting for pod pod-projected-configmaps-a4680ea1-9b43-4069-b899-08daa25ad74e to disappear Apr 16 23:39:21.403: INFO: Pod pod-projected-configmaps-a4680ea1-9b43-4069-b899-08daa25ad74e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:39:21.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9312" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":175,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:39:21.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 23:39:21.523: INFO: Creating daemon 
"daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 16 23:39:21.529: INFO: Number of nodes with available pods: 0
Apr 16 23:39:21.529: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 16 23:39:21.565: INFO: Number of nodes with available pods: 0
Apr 16 23:39:21.565: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:22.623: INFO: Number of nodes with available pods: 0
Apr 16 23:39:22.623: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:23.572: INFO: Number of nodes with available pods: 0
Apr 16 23:39:23.572: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:24.570: INFO: Number of nodes with available pods: 1
Apr 16 23:39:24.570: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 16 23:39:24.604: INFO: Number of nodes with available pods: 1
Apr 16 23:39:24.604: INFO: Number of running nodes: 0, number of available pods: 1
Apr 16 23:39:25.609: INFO: Number of nodes with available pods: 0
Apr 16 23:39:25.609: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 16 23:39:25.620: INFO: Number of nodes with available pods: 0
Apr 16 23:39:25.620: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:27.636: INFO: Number of nodes with available pods: 0
Apr 16 23:39:27.636: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:28.625: INFO: Number of nodes with available pods: 0
Apr 16 23:39:28.625: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:29.624: INFO: Number of nodes with available pods: 0
Apr 16 23:39:29.625: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:30.625: INFO: Number of nodes with available pods: 0
Apr 16 23:39:30.625: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:31.624: INFO: Number of nodes with available pods: 0
Apr 16 23:39:31.624: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:32.624: INFO: Number of nodes with available pods: 0
Apr 16 23:39:32.624: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:33.624: INFO: Number of nodes with available pods: 0
Apr 16 23:39:33.624: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:34.624: INFO: Number of nodes with available pods: 0
Apr 16 23:39:34.624: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:35.624: INFO: Number of nodes with available pods: 0
Apr 16 23:39:35.624: INFO: Node latest-worker is running more than one daemon pod
Apr 16 23:39:36.624: INFO: Number of nodes with available pods: 1
Apr 16 23:39:36.624: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2554, will wait for the garbage collector to delete the pods
Apr 16 23:39:36.693: INFO: Deleting DaemonSet.extensions daemon-set took: 9.021077ms
Apr 16 23:39:36.993: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.303226ms
Apr 16 23:39:42.796: INFO: Number of nodes with available pods: 0
Apr 16 23:39:42.796: INFO: Number of running nodes: 0, number of available pods: 0
Apr 16 23:39:42.803: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2554/daemonsets","resourceVersion":"8655914"},"items":null}
Apr 16 23:39:42.805: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2554/pods","resourceVersion":"8655914"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:39:42.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2554" for this suite.
• [SLOW TEST:21.458 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":11,"skipped":175,"failed":0}
SSS
------------------------------
[sig-api-machinery] Aggregator
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:39:42.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Apr 16 23:39:42.927: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Apr 16 23:39:43.499: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Apr 16 23:39:45.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677183, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677183, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677183, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677183, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 16 23:39:47.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677183, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677183, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677183, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677183, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 16 23:39:50.352: INFO: Waited 641.486149ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:39:50.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3634" for this suite.
• [SLOW TEST:8.044 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":12,"skipped":178,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:39:50.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-a5deb44d-674d-4515-a96c-ed754d0c28f9
STEP: Creating secret with name secret-projected-all-test-volume-d3aedaa6-cf84-482d-a6b5-237c46378da9
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 16 23:39:51.064: INFO: Waiting up to 5m0s for pod "projected-volume-9d6e7561-8b9c-42fd-a90a-739410e6cd22" in namespace "projected-5414" to be "Succeeded or Failed"
Apr 16 23:39:51.180: INFO: Pod "projected-volume-9d6e7561-8b9c-42fd-a90a-739410e6cd22": Phase="Pending", Reason="", readiness=false. Elapsed: 115.320262ms
Apr 16 23:39:53.258: INFO: Pod "projected-volume-9d6e7561-8b9c-42fd-a90a-739410e6cd22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193317508s
Apr 16 23:39:55.262: INFO: Pod "projected-volume-9d6e7561-8b9c-42fd-a90a-739410e6cd22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.197788272s
STEP: Saw pod success
Apr 16 23:39:55.262: INFO: Pod "projected-volume-9d6e7561-8b9c-42fd-a90a-739410e6cd22" satisfied condition "Succeeded or Failed"
Apr 16 23:39:55.266: INFO: Trying to get logs from node latest-worker pod projected-volume-9d6e7561-8b9c-42fd-a90a-739410e6cd22 container projected-all-volume-test:
STEP: delete the pod
Apr 16 23:39:55.338: INFO: Waiting for pod projected-volume-9d6e7561-8b9c-42fd-a90a-739410e6cd22 to disappear
Apr 16 23:39:55.350: INFO: Pod projected-volume-9d6e7561-8b9c-42fd-a90a-739410e6cd22 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:39:55.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5414" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":13,"skipped":197,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:39:55.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 16 23:39:56.092: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 16 23:39:58.104: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677196, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677196, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677196, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677196, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 23:40:01.123: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 16 23:40:01.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:40:02.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5140" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:6.979 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":14,"skipped":230,"failed":0}
SSSSSSS
------------------------------
[sig-node] Downward API
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:40:02.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 16 23:40:02.438: INFO: Waiting up to 5m0s for pod "downward-api-88fe31bd-846e-40db-b201-f56a42b3f918" in namespace "downward-api-4590" to be "Succeeded or Failed"
Apr 16 23:40:02.447: INFO: Pod "downward-api-88fe31bd-846e-40db-b201-f56a42b3f918": Phase="Pending", Reason="", readiness=false. Elapsed: 9.875264ms
Apr 16 23:40:04.452: INFO: Pod "downward-api-88fe31bd-846e-40db-b201-f56a42b3f918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013992617s
Apr 16 23:40:06.457: INFO: Pod "downward-api-88fe31bd-846e-40db-b201-f56a42b3f918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019544004s
STEP: Saw pod success
Apr 16 23:40:06.457: INFO: Pod "downward-api-88fe31bd-846e-40db-b201-f56a42b3f918" satisfied condition "Succeeded or Failed"
Apr 16 23:40:06.460: INFO: Trying to get logs from node latest-worker2 pod downward-api-88fe31bd-846e-40db-b201-f56a42b3f918 container dapi-container:
STEP: delete the pod
Apr 16 23:40:06.502: INFO: Waiting for pod downward-api-88fe31bd-846e-40db-b201-f56a42b3f918 to disappear
Apr 16 23:40:06.506: INFO: Pod downward-api-88fe31bd-846e-40db-b201-f56a42b3f918 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:40:06.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4590" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":237,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:40:06.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 16 23:40:06.562: INFO: Waiting up to 5m0s for pod "pod-c93dbdef-2750-482a-b6a2-7630618a891c" in namespace "emptydir-4750" to be "Succeeded or Failed"
Apr 16 23:40:06.577: INFO: Pod "pod-c93dbdef-2750-482a-b6a2-7630618a891c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.929781ms
Apr 16 23:40:08.581: INFO: Pod "pod-c93dbdef-2750-482a-b6a2-7630618a891c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019264203s
Apr 16 23:40:10.585: INFO: Pod "pod-c93dbdef-2750-482a-b6a2-7630618a891c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023275185s
STEP: Saw pod success
Apr 16 23:40:10.585: INFO: Pod "pod-c93dbdef-2750-482a-b6a2-7630618a891c" satisfied condition "Succeeded or Failed"
Apr 16 23:40:10.588: INFO: Trying to get logs from node latest-worker2 pod pod-c93dbdef-2750-482a-b6a2-7630618a891c container test-container:
STEP: delete the pod
Apr 16 23:40:10.609: INFO: Waiting for pod pod-c93dbdef-2750-482a-b6a2-7630618a891c to disappear
Apr 16 23:40:10.632: INFO: Pod pod-c93dbdef-2750-482a-b6a2-7630618a891c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:40:10.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4750" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":240,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:40:10.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 16 23:40:10.724: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46b76421-95fb-4586-8afa-4def7bd2b7e8" in namespace "downward-api-5332" to be "Succeeded or Failed"
Apr 16 23:40:10.728: INFO: Pod "downwardapi-volume-46b76421-95fb-4586-8afa-4def7bd2b7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.682409ms
Apr 16 23:40:12.733: INFO: Pod "downwardapi-volume-46b76421-95fb-4586-8afa-4def7bd2b7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008246052s
Apr 16 23:40:14.737: INFO: Pod "downwardapi-volume-46b76421-95fb-4586-8afa-4def7bd2b7e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01275062s
STEP: Saw pod success
Apr 16 23:40:14.737: INFO: Pod "downwardapi-volume-46b76421-95fb-4586-8afa-4def7bd2b7e8" satisfied condition "Succeeded or Failed"
Apr 16 23:40:14.740: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-46b76421-95fb-4586-8afa-4def7bd2b7e8 container client-container:
STEP: delete the pod
Apr 16 23:40:14.774: INFO: Waiting for pod downwardapi-volume-46b76421-95fb-4586-8afa-4def7bd2b7e8 to disappear
Apr 16 23:40:14.782: INFO: Pod downwardapi-volume-46b76421-95fb-4586-8afa-4def7bd2b7e8 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:40:14.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5332" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":277,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap
should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:40:14.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-2fa3aa2c-ef27-493a-b5ba-50303a803a64
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:40:14.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1622" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":18,"skipped":280,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:40:14.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:40:28.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9332" for this suite.
• [SLOW TEST:13.262 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":19,"skipped":300,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:40:28.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-4e3ed3bc-b917-43a7-b95e-b905ec349a67 in namespace container-probe-7601
Apr 16 23:40:32.220: INFO: Started pod busybox-4e3ed3bc-b917-43a7-b95e-b905ec349a67 in namespace container-probe-7601
STEP: checking the pod's current state and verifying that restartCount is present
Apr 16 23:40:32.223: INFO: Initial restart count of pod busybox-4e3ed3bc-b917-43a7-b95e-b905ec349a67 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:44:32.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7601" for this suite.
• [SLOW TEST:244.742 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":314,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:44:32.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 16 23:44:32.937: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71960922-85a2-43be-88f9-efc49319e6db" in namespace "projected-5347" to be "Succeeded or Failed"
Apr 16 23:44:32.953: INFO: Pod "downwardapi-volume-71960922-85a2-43be-88f9-efc49319e6db": Phase="Pending", Reason="", readiness=false. Elapsed: 15.352395ms
Apr 16 23:44:34.957: INFO: Pod "downwardapi-volume-71960922-85a2-43be-88f9-efc49319e6db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019795527s
Apr 16 23:44:36.961: INFO: Pod "downwardapi-volume-71960922-85a2-43be-88f9-efc49319e6db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024213339s
Apr 16 23:44:38.966: INFO: Pod "downwardapi-volume-71960922-85a2-43be-88f9-efc49319e6db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028662835s
STEP: Saw pod success
Apr 16 23:44:38.966: INFO: Pod "downwardapi-volume-71960922-85a2-43be-88f9-efc49319e6db" satisfied condition "Succeeded or Failed"
Apr 16 23:44:38.969: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-71960922-85a2-43be-88f9-efc49319e6db container client-container:
STEP: delete the pod
Apr 16 23:44:38.996: INFO: Waiting for pod downwardapi-volume-71960922-85a2-43be-88f9-efc49319e6db to disappear
Apr 16 23:44:39.001: INFO: Pod downwardapi-volume-71960922-85a2-43be-88f9-efc49319e6db no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:44:39.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5347" for this suite.
• [SLOW TEST:6.150 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":327,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:44:39.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Apr 16 23:44:39.069: INFO: >>> kubeConfig: /root/.kube/config
Apr 16 23:44:41.010: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:44:51.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5408" for this suite.
• [SLOW TEST:12.721 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":22,"skipped":327,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:44:51.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 16 23:44:56.358: INFO: Successfully updated pod "annotationupdate5c42255c-4170-41aa-b50c-415e056a7914"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:44:58.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-948" for this suite.
• [SLOW TEST:6.653 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":339,"failed":0}
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:44:58.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-47c57443-33b2-4de5-aeb0-423cf9f598c3
STEP: Creating a pod to test consume configMaps
Apr 16 23:44:58.502: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a9282eb-495b-465c-af2d-7b211114d57c" in namespace "projected-7182" to be "Succeeded or Failed"
Apr 16 23:44:58.526: INFO: Pod "pod-projected-configmaps-7a9282eb-495b-465c-af2d-7b211114d57c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.449396ms
Apr 16 23:45:00.530: INFO: Pod "pod-projected-configmaps-7a9282eb-495b-465c-af2d-7b211114d57c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027673231s
Apr 16 23:45:02.535: INFO: Pod "pod-projected-configmaps-7a9282eb-495b-465c-af2d-7b211114d57c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032413357s
STEP: Saw pod success
Apr 16 23:45:02.535: INFO: Pod "pod-projected-configmaps-7a9282eb-495b-465c-af2d-7b211114d57c" satisfied condition "Succeeded or Failed"
Apr 16 23:45:02.538: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-7a9282eb-495b-465c-af2d-7b211114d57c container projected-configmap-volume-test:
STEP: delete the pod
Apr 16 23:45:02.566: INFO: Waiting for pod pod-projected-configmaps-7a9282eb-495b-465c-af2d-7b211114d57c to disappear
Apr 16 23:45:02.588: INFO: Pod pod-projected-configmaps-7a9282eb-495b-465c-af2d-7b211114d57c no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:45:02.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7182" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":339,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:45:02.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 16 23:45:02.668: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:45:03.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9265" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":25,"skipped":363,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:45:03.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-3212
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 16 23:45:03.965: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 16 23:45:04.022: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 23:45:06.026: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 23:45:08.027: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 23:45:10.027: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 23:45:12.027: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 23:45:14.027: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 23:45:16.027: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 23:45:18.027: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 23:45:20.027: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 23:45:22.026: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 23:45:24.027: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 16 23:45:24.032: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 16 23:45:28.077: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.129:8080/dial?request=hostname&protocol=udp&host=10.244.2.162&port=8081&tries=1'] Namespace:pod-network-test-3212 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 16 23:45:28.077: INFO: >>> kubeConfig: /root/.kube/config
I0416 23:45:28.118254       7 log.go:172] (0xc002d5c9a0) (0xc000913540) Create stream
I0416 23:45:28.118291       7 log.go:172] (0xc002d5c9a0) (0xc000913540) Stream added, broadcasting: 1
I0416 23:45:28.122733       7 log.go:172] (0xc002d5c9a0) Reply frame received for 1
I0416 23:45:28.122849       7 log.go:172] (0xc002d5c9a0) (0xc000c76be0) Create stream
I0416 23:45:28.122918       7 log.go:172] (0xc002d5c9a0) (0xc000c76be0) Stream added, broadcasting: 3
I0416 23:45:28.125990       7 log.go:172] (0xc002d5c9a0) Reply frame received for 3
I0416 23:45:28.126040       7 log.go:172] (0xc002d5c9a0) (0xc0009135e0) Create stream
I0416 23:45:28.126073       7 log.go:172] (0xc002d5c9a0) (0xc0009135e0) Stream added, broadcasting: 5
I0416 23:45:28.129553       7 log.go:172] (0xc002d5c9a0) Reply frame received for 5
I0416 23:45:28.199974       7 log.go:172] (0xc002d5c9a0) Data frame received for 3
I0416 23:45:28.200019       7 log.go:172] (0xc000c76be0) (3) Data frame handling
I0416 23:45:28.200049       7 log.go:172] (0xc000c76be0) (3) Data frame sent
I0416 23:45:28.200426       7 log.go:172] (0xc002d5c9a0) Data frame received for 3
I0416 23:45:28.200453       7 log.go:172] (0xc000c76be0) (3) Data frame handling
I0416 23:45:28.200488       7 log.go:172] (0xc002d5c9a0) Data frame received for 5
I0416 23:45:28.200540       7 log.go:172] (0xc0009135e0) (5) Data frame handling
I0416 23:45:28.201970       7 log.go:172] (0xc002d5c9a0) Data frame received for 1
I0416 23:45:28.201998       7 log.go:172] (0xc000913540) (1) Data frame handling
I0416 23:45:28.202015       7 log.go:172] (0xc000913540) (1) Data frame sent
I0416 23:45:28.202137       7 log.go:172] (0xc002d5c9a0) (0xc000913540) Stream removed, broadcasting: 1
I0416 23:45:28.202172       7 log.go:172] (0xc002d5c9a0) Go away received
I0416 23:45:28.202743       7 log.go:172] (0xc002d5c9a0) (0xc000913540) Stream removed, broadcasting: 1
I0416 23:45:28.202772       7 log.go:172] (0xc002d5c9a0) (0xc000c76be0) Stream removed, broadcasting: 3
I0416 23:45:28.202789       7 log.go:172] (0xc002d5c9a0) (0xc0009135e0) Stream removed, broadcasting: 5
Apr 16 23:45:28.202: INFO: Waiting for responses: map[]
Apr 16 23:45:28.206: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.129:8080/dial?request=hostname&protocol=udp&host=10.244.1.128&port=8081&tries=1'] Namespace:pod-network-test-3212 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 16 23:45:28.206: INFO: >>> kubeConfig: /root/.kube/config
I0416 23:45:28.236091       7 log.go:172] (0xc0027ef340) (0xc00085b9a0) Create stream
I0416 23:45:28.236122       7 log.go:172] (0xc0027ef340) (0xc00085b9a0) Stream added, broadcasting: 1
I0416 23:45:28.238106       7 log.go:172] (0xc0027ef340) Reply frame received for 1
I0416 23:45:28.238141       7 log.go:172] (0xc0027ef340) (0xc00085bae0) Create stream
I0416 23:45:28.238154       7 log.go:172] (0xc0027ef340) (0xc00085bae0) Stream added, broadcasting: 3
I0416 23:45:28.239169       7 log.go:172] (0xc0027ef340) Reply frame received for 3
I0416 23:45:28.239216       7 log.go:172] (0xc0027ef340) (0xc000c76c80) Create stream
I0416 23:45:28.239237       7 log.go:172] (0xc0027ef340) (0xc000c76c80) Stream added, broadcasting: 5
I0416 23:45:28.240166       7 log.go:172] (0xc0027ef340) Reply frame received for 5
I0416 23:45:28.302046       7 log.go:172] (0xc0027ef340) Data frame received for 3
I0416 23:45:28.302093       7 log.go:172] (0xc00085bae0) (3) Data frame handling
I0416 23:45:28.302110       7 log.go:172] (0xc00085bae0) (3) Data frame sent
I0416 23:45:28.302536       7 log.go:172] (0xc0027ef340) Data frame received for 5
I0416 23:45:28.302561       7 log.go:172] (0xc0027ef340) Data frame received for 3
I0416 23:45:28.302596       7 log.go:172] (0xc00085bae0) (3) Data frame handling
I0416 23:45:28.302623       7 log.go:172] (0xc000c76c80) (5) Data frame handling
I0416 23:45:28.303922       7 log.go:172] (0xc0027ef340) Data frame received for 1
I0416 23:45:28.303950       7 log.go:172] (0xc00085b9a0) (1) Data frame handling
I0416 23:45:28.303990       7 log.go:172] (0xc00085b9a0) (1) Data frame sent
I0416 23:45:28.304014       7 log.go:172] (0xc0027ef340) (0xc00085b9a0) Stream removed, broadcasting: 1
I0416 23:45:28.304114       7 log.go:172] (0xc0027ef340) Go away received
I0416 23:45:28.304153       7 log.go:172] (0xc0027ef340) (0xc00085b9a0) Stream removed, broadcasting: 1
I0416 23:45:28.304177       7 log.go:172] (0xc0027ef340) (0xc00085bae0) Stream removed, broadcasting: 3
I0416 23:45:28.304195       7 log.go:172] (0xc0027ef340) (0xc000c76c80) Stream removed, broadcasting: 5
Apr 16 23:45:28.304: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:45:28.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3212" for this suite.
• [SLOW TEST:24.486 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":379,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:45:28.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:45:44.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1581" for this suite.
• [SLOW TEST:16.124 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":27,"skipped":432,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:45:44.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 16 23:45:44.502: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50550d7c-d9f1-4c62-b4fb-06eff4bce821" in namespace "projected-842" to be "Succeeded or Failed"
Apr 16 23:45:44.538: INFO: Pod "downwardapi-volume-50550d7c-d9f1-4c62-b4fb-06eff4bce821": Phase="Pending", Reason="", readiness=false. Elapsed: 36.332619ms
Apr 16 23:45:46.542: INFO: Pod "downwardapi-volume-50550d7c-d9f1-4c62-b4fb-06eff4bce821": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040463055s
Apr 16 23:45:49.178: INFO: Pod "downwardapi-volume-50550d7c-d9f1-4c62-b4fb-06eff4bce821": Phase="Pending", Reason="", readiness=false. Elapsed: 4.675962682s
Apr 16 23:45:51.182: INFO: Pod "downwardapi-volume-50550d7c-d9f1-4c62-b4fb-06eff4bce821": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.680341817s
STEP: Saw pod success
Apr 16 23:45:51.182: INFO: Pod "downwardapi-volume-50550d7c-d9f1-4c62-b4fb-06eff4bce821" satisfied condition "Succeeded or Failed"
Apr 16 23:45:51.186: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-50550d7c-d9f1-4c62-b4fb-06eff4bce821 container client-container:
STEP: delete the pod
Apr 16 23:45:51.220: INFO: Waiting for pod downwardapi-volume-50550d7c-d9f1-4c62-b4fb-06eff4bce821 to disappear
Apr 16 23:45:51.236: INFO: Pod downwardapi-volume-50550d7c-d9f1-4c62-b4fb-06eff4bce821 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:45:51.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-842" for this suite.
• [SLOW TEST:6.807 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":441,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:45:51.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-ffd7d388-f06e-4889-a4c5-de3c000de29c
STEP: Creating configMap with name cm-test-opt-upd-b48b0a80-a631-45b3-b37c-ab2f539369fd
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ffd7d388-f06e-4889-a4c5-de3c000de29c
STEP: Updating configmap cm-test-opt-upd-b48b0a80-a631-45b3-b37c-ab2f539369fd
STEP: Creating configMap with name cm-test-opt-create-4863b4ec-ffeb-4c09-91f2-75c0ade53a64
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:47:07.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4873" for this suite.
• [SLOW TEST:76.645 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":491,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:47:07.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 16 23:47:07.934: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:47:08.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9276" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":30,"skipped":491,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:47:08.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 16 23:47:08.631: INFO: PodSpec: initContainers in spec.initContainers
Apr 16 23:48:02.929: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-abcfdd04-d3c9-4283-bcad-d40a4a5f268a", GenerateName:"", Namespace:"init-container-7237", SelfLink:"/api/v1/namespaces/init-container-7237/pods/pod-init-abcfdd04-d3c9-4283-bcad-d40a4a5f268a", UID:"3995ebc4-a23b-46fc-b89b-610d7f9eaa3a", ResourceVersion:"8658073", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722677628, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"631761089"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vrhgn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002aae080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vrhgn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vrhgn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vrhgn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001fa19f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002cedb90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fa1b30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fa1b50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001fa1b58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001fa1b5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677628, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677628, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677628, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677628, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.132", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.132"}}, StartTime:(*v1.Time)(0xc003250180), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002cedc70)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002cedce0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://7981b12e81d8ca7e555d0d6c321c90b3e3ac1c2c2331df2cb09b67865215d15c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0032501c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0032501a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc001fa1c0f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:48:02.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7237" for this suite.
• [SLOW TEST:54.384 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":31,"skipped":559,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 23:48:02.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 16 23:48:07.596: INFO: Successfully updated pod "annotationupdate62f6ad92-c072-4b1a-9db0-345176228832"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 23:48:09.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9391" for this suite.
• [SLOW TEST:7.083 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":585,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:48:10.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-w7r6 STEP: Creating a pod to test atomic-volume-subpath Apr 16 23:48:11.078: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-w7r6" in namespace "subpath-8936" to be "Succeeded or Failed" Apr 16 23:48:11.082: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429995ms Apr 16 23:48:13.151: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.073067825s Apr 16 23:48:15.159: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Running", Reason="", readiness=true. Elapsed: 4.08133885s Apr 16 23:48:17.163: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Running", Reason="", readiness=true. Elapsed: 6.085052852s Apr 16 23:48:19.169: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Running", Reason="", readiness=true. Elapsed: 8.090836642s Apr 16 23:48:21.195: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Running", Reason="", readiness=true. Elapsed: 10.116627689s Apr 16 23:48:23.205: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Running", Reason="", readiness=true. Elapsed: 12.127030053s Apr 16 23:48:25.229: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Running", Reason="", readiness=true. Elapsed: 14.1507182s Apr 16 23:48:27.233: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Running", Reason="", readiness=true. Elapsed: 16.154662054s Apr 16 23:48:29.236: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Running", Reason="", readiness=true. Elapsed: 18.1585255s Apr 16 23:48:31.241: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Running", Reason="", readiness=true. Elapsed: 20.162628315s Apr 16 23:48:33.271: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Running", Reason="", readiness=true. Elapsed: 22.192920293s Apr 16 23:48:35.276: INFO: Pod "pod-subpath-test-secret-w7r6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.198618474s STEP: Saw pod success Apr 16 23:48:35.277: INFO: Pod "pod-subpath-test-secret-w7r6" satisfied condition "Succeeded or Failed" Apr 16 23:48:35.280: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-w7r6 container test-container-subpath-secret-w7r6: STEP: delete the pod Apr 16 23:48:35.328: INFO: Waiting for pod pod-subpath-test-secret-w7r6 to disappear Apr 16 23:48:35.350: INFO: Pod pod-subpath-test-secret-w7r6 no longer exists STEP: Deleting pod pod-subpath-test-secret-w7r6 Apr 16 23:48:35.350: INFO: Deleting pod "pod-subpath-test-secret-w7r6" in namespace "subpath-8936" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:48:35.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8936" for this suite. • [SLOW TEST:25.337 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":33,"skipped":634,"failed":0} [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:48:35.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-389 STEP: creating replication controller nodeport-test in namespace services-389 I0416 23:48:35.563106 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-389, replica count: 2 I0416 23:48:38.613697 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 23:48:41.613929 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 16 23:48:41.613: INFO: Creating new exec pod Apr 16 23:48:46.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-389 execpodcckbb -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 16 23:48:49.429: INFO: stderr: "I0416 23:48:49.320283 261 log.go:172] (0xc0008f0580) (0xc0007eb5e0) Create stream\nI0416 23:48:49.320318 261 log.go:172] (0xc0008f0580) (0xc0007eb5e0) Stream added, broadcasting: 1\nI0416 23:48:49.323260 261 log.go:172] (0xc0008f0580) Reply frame received for 1\nI0416 23:48:49.323307 261 log.go:172] (0xc0008f0580) (0xc000754000) Create stream\nI0416 23:48:49.323320 261 log.go:172] (0xc0008f0580) (0xc000754000) Stream added, broadcasting: 3\nI0416 23:48:49.324120 261 log.go:172] (0xc0008f0580) Reply frame received for 3\nI0416 23:48:49.324139 261 log.go:172] (0xc0008f0580) (0xc000760000) Create stream\nI0416 23:48:49.324145 261 log.go:172] 
(0xc0008f0580) (0xc000760000) Stream added, broadcasting: 5\nI0416 23:48:49.324918 261 log.go:172] (0xc0008f0580) Reply frame received for 5\nI0416 23:48:49.421429 261 log.go:172] (0xc0008f0580) Data frame received for 5\nI0416 23:48:49.421452 261 log.go:172] (0xc000760000) (5) Data frame handling\nI0416 23:48:49.421469 261 log.go:172] (0xc000760000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0416 23:48:49.421958 261 log.go:172] (0xc0008f0580) Data frame received for 5\nI0416 23:48:49.421977 261 log.go:172] (0xc000760000) (5) Data frame handling\nI0416 23:48:49.422007 261 log.go:172] (0xc000760000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0416 23:48:49.422306 261 log.go:172] (0xc0008f0580) Data frame received for 3\nI0416 23:48:49.422331 261 log.go:172] (0xc000754000) (3) Data frame handling\nI0416 23:48:49.422352 261 log.go:172] (0xc0008f0580) Data frame received for 5\nI0416 23:48:49.422362 261 log.go:172] (0xc000760000) (5) Data frame handling\nI0416 23:48:49.424081 261 log.go:172] (0xc0008f0580) Data frame received for 1\nI0416 23:48:49.424104 261 log.go:172] (0xc0007eb5e0) (1) Data frame handling\nI0416 23:48:49.424122 261 log.go:172] (0xc0007eb5e0) (1) Data frame sent\nI0416 23:48:49.424141 261 log.go:172] (0xc0008f0580) (0xc0007eb5e0) Stream removed, broadcasting: 1\nI0416 23:48:49.424236 261 log.go:172] (0xc0008f0580) Go away received\nI0416 23:48:49.424599 261 log.go:172] (0xc0008f0580) (0xc0007eb5e0) Stream removed, broadcasting: 1\nI0416 23:48:49.424616 261 log.go:172] (0xc0008f0580) (0xc000754000) Stream removed, broadcasting: 3\nI0416 23:48:49.424626 261 log.go:172] (0xc0008f0580) (0xc000760000) Stream removed, broadcasting: 5\n" Apr 16 23:48:49.429: INFO: stdout: "" Apr 16 23:48:49.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-389 execpodcckbb -- /bin/sh -x -c nc -zv -t -w 2 10.96.223.173 80' Apr 16 
23:48:49.659: INFO: stderr: "I0416 23:48:49.572715 291 log.go:172] (0xc00003bb80) (0xc00070a320) Create stream\nI0416 23:48:49.572780 291 log.go:172] (0xc00003bb80) (0xc00070a320) Stream added, broadcasting: 1\nI0416 23:48:49.576776 291 log.go:172] (0xc00003bb80) Reply frame received for 1\nI0416 23:48:49.576833 291 log.go:172] (0xc00003bb80) (0xc00054f220) Create stream\nI0416 23:48:49.576871 291 log.go:172] (0xc00003bb80) (0xc00054f220) Stream added, broadcasting: 3\nI0416 23:48:49.578048 291 log.go:172] (0xc00003bb80) Reply frame received for 3\nI0416 23:48:49.578083 291 log.go:172] (0xc00003bb80) (0xc00070a3c0) Create stream\nI0416 23:48:49.578092 291 log.go:172] (0xc00003bb80) (0xc00070a3c0) Stream added, broadcasting: 5\nI0416 23:48:49.579353 291 log.go:172] (0xc00003bb80) Reply frame received for 5\nI0416 23:48:49.653664 291 log.go:172] (0xc00003bb80) Data frame received for 3\nI0416 23:48:49.653702 291 log.go:172] (0xc00054f220) (3) Data frame handling\nI0416 23:48:49.653722 291 log.go:172] (0xc00003bb80) Data frame received for 5\nI0416 23:48:49.653730 291 log.go:172] (0xc00070a3c0) (5) Data frame handling\nI0416 23:48:49.653740 291 log.go:172] (0xc00070a3c0) (5) Data frame sent\nI0416 23:48:49.653747 291 log.go:172] (0xc00003bb80) Data frame received for 5\nI0416 23:48:49.653753 291 log.go:172] (0xc00070a3c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.223.173 80\nConnection to 10.96.223.173 80 port [tcp/http] succeeded!\nI0416 23:48:49.655058 291 log.go:172] (0xc00003bb80) Data frame received for 1\nI0416 23:48:49.655077 291 log.go:172] (0xc00070a320) (1) Data frame handling\nI0416 23:48:49.655099 291 log.go:172] (0xc00070a320) (1) Data frame sent\nI0416 23:48:49.655114 291 log.go:172] (0xc00003bb80) (0xc00070a320) Stream removed, broadcasting: 1\nI0416 23:48:49.655186 291 log.go:172] (0xc00003bb80) Go away received\nI0416 23:48:49.655571 291 log.go:172] (0xc00003bb80) (0xc00070a320) Stream removed, broadcasting: 1\nI0416 23:48:49.655611 291 
log.go:172] (0xc00003bb80) (0xc00054f220) Stream removed, broadcasting: 3\nI0416 23:48:49.655624 291 log.go:172] (0xc00003bb80) (0xc00070a3c0) Stream removed, broadcasting: 5\n" Apr 16 23:48:49.659: INFO: stdout: "" Apr 16 23:48:49.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-389 execpodcckbb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32661' Apr 16 23:48:49.857: INFO: stderr: "I0416 23:48:49.778552 313 log.go:172] (0xc0009ec0b0) (0xc000502b40) Create stream\nI0416 23:48:49.778609 313 log.go:172] (0xc0009ec0b0) (0xc000502b40) Stream added, broadcasting: 1\nI0416 23:48:49.781007 313 log.go:172] (0xc0009ec0b0) Reply frame received for 1\nI0416 23:48:49.781051 313 log.go:172] (0xc0009ec0b0) (0xc0009b6000) Create stream\nI0416 23:48:49.781063 313 log.go:172] (0xc0009ec0b0) (0xc0009b6000) Stream added, broadcasting: 3\nI0416 23:48:49.781873 313 log.go:172] (0xc0009ec0b0) Reply frame received for 3\nI0416 23:48:49.781908 313 log.go:172] (0xc0009ec0b0) (0xc0008ee000) Create stream\nI0416 23:48:49.781920 313 log.go:172] (0xc0009ec0b0) (0xc0008ee000) Stream added, broadcasting: 5\nI0416 23:48:49.782753 313 log.go:172] (0xc0009ec0b0) Reply frame received for 5\nI0416 23:48:49.850625 313 log.go:172] (0xc0009ec0b0) Data frame received for 5\nI0416 23:48:49.850651 313 log.go:172] (0xc0008ee000) (5) Data frame handling\nI0416 23:48:49.850659 313 log.go:172] (0xc0008ee000) (5) Data frame sent\nI0416 23:48:49.850664 313 log.go:172] (0xc0009ec0b0) Data frame received for 5\nI0416 23:48:49.850670 313 log.go:172] (0xc0008ee000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32661\nConnection to 172.17.0.13 32661 port [tcp/32661] succeeded!\nI0416 23:48:49.850685 313 log.go:172] (0xc0009ec0b0) Data frame received for 3\nI0416 23:48:49.850727 313 log.go:172] (0xc0009b6000) (3) Data frame handling\nI0416 23:48:49.852388 313 log.go:172] (0xc0009ec0b0) Data frame received for 1\nI0416 
23:48:49.852405 313 log.go:172] (0xc000502b40) (1) Data frame handling\nI0416 23:48:49.852417 313 log.go:172] (0xc000502b40) (1) Data frame sent\nI0416 23:48:49.852429 313 log.go:172] (0xc0009ec0b0) (0xc000502b40) Stream removed, broadcasting: 1\nI0416 23:48:49.852445 313 log.go:172] (0xc0009ec0b0) Go away received\nI0416 23:48:49.852758 313 log.go:172] (0xc0009ec0b0) (0xc000502b40) Stream removed, broadcasting: 1\nI0416 23:48:49.852778 313 log.go:172] (0xc0009ec0b0) (0xc0009b6000) Stream removed, broadcasting: 3\nI0416 23:48:49.852786 313 log.go:172] (0xc0009ec0b0) (0xc0008ee000) Stream removed, broadcasting: 5\n" Apr 16 23:48:49.857: INFO: stdout: "" Apr 16 23:48:49.857: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-389 execpodcckbb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32661' Apr 16 23:48:50.081: INFO: stderr: "I0416 23:48:49.991417 333 log.go:172] (0xc0003c66e0) (0xc000b3a1e0) Create stream\nI0416 23:48:49.991495 333 log.go:172] (0xc0003c66e0) (0xc000b3a1e0) Stream added, broadcasting: 1\nI0416 23:48:50.008995 333 log.go:172] (0xc0003c66e0) Reply frame received for 1\nI0416 23:48:50.009069 333 log.go:172] (0xc0003c66e0) (0xc000a9c140) Create stream\nI0416 23:48:50.009095 333 log.go:172] (0xc0003c66e0) (0xc000a9c140) Stream added, broadcasting: 3\nI0416 23:48:50.010678 333 log.go:172] (0xc0003c66e0) Reply frame received for 3\nI0416 23:48:50.010735 333 log.go:172] (0xc0003c66e0) (0xc0005ad720) Create stream\nI0416 23:48:50.010752 333 log.go:172] (0xc0003c66e0) (0xc0005ad720) Stream added, broadcasting: 5\nI0416 23:48:50.011656 333 log.go:172] (0xc0003c66e0) Reply frame received for 5\nI0416 23:48:50.074050 333 log.go:172] (0xc0003c66e0) Data frame received for 3\nI0416 23:48:50.074088 333 log.go:172] (0xc000a9c140) (3) Data frame handling\nI0416 23:48:50.074145 333 log.go:172] (0xc0003c66e0) Data frame received for 5\nI0416 23:48:50.074191 333 log.go:172] (0xc0005ad720) 
(5) Data frame handling\nI0416 23:48:50.074253 333 log.go:172] (0xc0005ad720) (5) Data frame sent\nI0416 23:48:50.074276 333 log.go:172] (0xc0003c66e0) Data frame received for 5\nI0416 23:48:50.074293 333 log.go:172] (0xc0005ad720) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32661\nConnection to 172.17.0.12 32661 port [tcp/32661] succeeded!\nI0416 23:48:50.076290 333 log.go:172] (0xc0003c66e0) Data frame received for 1\nI0416 23:48:50.076331 333 log.go:172] (0xc000b3a1e0) (1) Data frame handling\nI0416 23:48:50.076359 333 log.go:172] (0xc000b3a1e0) (1) Data frame sent\nI0416 23:48:50.076384 333 log.go:172] (0xc0003c66e0) (0xc000b3a1e0) Stream removed, broadcasting: 1\nI0416 23:48:50.076401 333 log.go:172] (0xc0003c66e0) Go away received\nI0416 23:48:50.076936 333 log.go:172] (0xc0003c66e0) (0xc000b3a1e0) Stream removed, broadcasting: 1\nI0416 23:48:50.076962 333 log.go:172] (0xc0003c66e0) (0xc000a9c140) Stream removed, broadcasting: 3\nI0416 23:48:50.076974 333 log.go:172] (0xc0003c66e0) (0xc0005ad720) Stream removed, broadcasting: 5\n" Apr 16 23:48:50.081: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:48:50.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-389" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:14.730 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":34,"skipped":634,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:48:50.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-8bd33c3a-718a-4728-a558-f27dfc46083b STEP: Creating a pod to test consume secrets Apr 16 23:48:50.177: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee6b5347-61f6-4d8d-b07c-4381d2f1a356" in namespace "projected-728" to be "Succeeded or Failed" Apr 16 23:48:50.195: INFO: Pod "pod-projected-secrets-ee6b5347-61f6-4d8d-b07c-4381d2f1a356": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.407852ms Apr 16 23:48:52.203: INFO: Pod "pod-projected-secrets-ee6b5347-61f6-4d8d-b07c-4381d2f1a356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025554077s Apr 16 23:48:54.259: INFO: Pod "pod-projected-secrets-ee6b5347-61f6-4d8d-b07c-4381d2f1a356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082107101s STEP: Saw pod success Apr 16 23:48:54.259: INFO: Pod "pod-projected-secrets-ee6b5347-61f6-4d8d-b07c-4381d2f1a356" satisfied condition "Succeeded or Failed" Apr 16 23:48:54.262: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-ee6b5347-61f6-4d8d-b07c-4381d2f1a356 container secret-volume-test: STEP: delete the pod Apr 16 23:48:54.423: INFO: Waiting for pod pod-projected-secrets-ee6b5347-61f6-4d8d-b07c-4381d2f1a356 to disappear Apr 16 23:48:54.439: INFO: Pod pod-projected-secrets-ee6b5347-61f6-4d8d-b07c-4381d2f1a356 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:48:54.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-728" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":643,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:48:54.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 16 23:48:54.943: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 16 23:48:56.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677734, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677734, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677734, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677734, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 16 23:48:59.995: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 23:48:59.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2368-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:49:01.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1770" for this suite. STEP: Destroying namespace "webhook-1770-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.758 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":36,"skipped":726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:49:01.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7645 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7645 I0416 23:49:01.375270 7 runners.go:190] Created replication controller with name: externalname-service, 
namespace: services-7645, replica count: 2 I0416 23:49:04.425726 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 23:49:07.425945 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 16 23:49:07.425: INFO: Creating new exec pod Apr 16 23:49:12.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7645 execpod4kfqs -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 16 23:49:12.688: INFO: stderr: "I0416 23:49:12.590475 356 log.go:172] (0xc0000ea790) (0xc000412b40) Create stream\nI0416 23:49:12.590566 356 log.go:172] (0xc0000ea790) (0xc000412b40) Stream added, broadcasting: 1\nI0416 23:49:12.594209 356 log.go:172] (0xc0000ea790) Reply frame received for 1\nI0416 23:49:12.594288 356 log.go:172] (0xc0000ea790) (0xc0006ab2c0) Create stream\nI0416 23:49:12.594333 356 log.go:172] (0xc0000ea790) (0xc0006ab2c0) Stream added, broadcasting: 3\nI0416 23:49:12.595554 356 log.go:172] (0xc0000ea790) Reply frame received for 3\nI0416 23:49:12.595608 356 log.go:172] (0xc0000ea790) (0xc000a3c000) Create stream\nI0416 23:49:12.595626 356 log.go:172] (0xc0000ea790) (0xc000a3c000) Stream added, broadcasting: 5\nI0416 23:49:12.596805 356 log.go:172] (0xc0000ea790) Reply frame received for 5\nI0416 23:49:12.681951 356 log.go:172] (0xc0000ea790) Data frame received for 5\nI0416 23:49:12.681975 356 log.go:172] (0xc000a3c000) (5) Data frame handling\nI0416 23:49:12.681992 356 log.go:172] (0xc000a3c000) (5) Data frame sent\nI0416 23:49:12.682001 356 log.go:172] (0xc0000ea790) Data frame received for 5\nI0416 23:49:12.682008 356 log.go:172] (0xc000a3c000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] 
succeeded!\nI0416 23:49:12.682021 356 log.go:172] (0xc000a3c000) (5) Data frame sent\nI0416 23:49:12.682092 356 log.go:172] (0xc0000ea790) Data frame received for 5\nI0416 23:49:12.682116 356 log.go:172] (0xc000a3c000) (5) Data frame handling\nI0416 23:49:12.682386 356 log.go:172] (0xc0000ea790) Data frame received for 3\nI0416 23:49:12.682400 356 log.go:172] (0xc0006ab2c0) (3) Data frame handling\nI0416 23:49:12.684223 356 log.go:172] (0xc0000ea790) Data frame received for 1\nI0416 23:49:12.684250 356 log.go:172] (0xc000412b40) (1) Data frame handling\nI0416 23:49:12.684261 356 log.go:172] (0xc000412b40) (1) Data frame sent\nI0416 23:49:12.684369 356 log.go:172] (0xc0000ea790) (0xc000412b40) Stream removed, broadcasting: 1\nI0416 23:49:12.684486 356 log.go:172] (0xc0000ea790) Go away received\nI0416 23:49:12.684601 356 log.go:172] (0xc0000ea790) (0xc000412b40) Stream removed, broadcasting: 1\nI0416 23:49:12.684612 356 log.go:172] (0xc0000ea790) (0xc0006ab2c0) Stream removed, broadcasting: 3\nI0416 23:49:12.684618 356 log.go:172] (0xc0000ea790) (0xc000a3c000) Stream removed, broadcasting: 5\n" Apr 16 23:49:12.689: INFO: stdout: "" Apr 16 23:49:12.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7645 execpod4kfqs -- /bin/sh -x -c nc -zv -t -w 2 10.96.159.243 80' Apr 16 23:49:12.905: INFO: stderr: "I0416 23:49:12.812212 379 log.go:172] (0xc000b52000) (0xc000b18000) Create stream\nI0416 23:49:12.812299 379 log.go:172] (0xc000b52000) (0xc000b18000) Stream added, broadcasting: 1\nI0416 23:49:12.816124 379 log.go:172] (0xc000b52000) Reply frame received for 1\nI0416 23:49:12.816162 379 log.go:172] (0xc000b52000) (0xc000b180a0) Create stream\nI0416 23:49:12.816174 379 log.go:172] (0xc000b52000) (0xc000b180a0) Stream added, broadcasting: 3\nI0416 23:49:12.817084 379 log.go:172] (0xc000b52000) Reply frame received for 3\nI0416 23:49:12.817250 379 log.go:172] (0xc000b52000) (0xc000b183c0) 
Create stream\nI0416 23:49:12.817267 379 log.go:172] (0xc000b52000) (0xc000b183c0) Stream added, broadcasting: 5\nI0416 23:49:12.818219 379 log.go:172] (0xc000b52000) Reply frame received for 5\nI0416 23:49:12.897365 379 log.go:172] (0xc000b52000) Data frame received for 5\nI0416 23:49:12.897391 379 log.go:172] (0xc000b183c0) (5) Data frame handling\nI0416 23:49:12.897407 379 log.go:172] (0xc000b183c0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.159.243 80\nConnection to 10.96.159.243 80 port [tcp/http] succeeded!\nI0416 23:49:12.897484 379 log.go:172] (0xc000b52000) Data frame received for 3\nI0416 23:49:12.897532 379 log.go:172] (0xc000b180a0) (3) Data frame handling\nI0416 23:49:12.897704 379 log.go:172] (0xc000b52000) Data frame received for 5\nI0416 23:49:12.897740 379 log.go:172] (0xc000b183c0) (5) Data frame handling\nI0416 23:49:12.899714 379 log.go:172] (0xc000b52000) Data frame received for 1\nI0416 23:49:12.899759 379 log.go:172] (0xc000b18000) (1) Data frame handling\nI0416 23:49:12.899798 379 log.go:172] (0xc000b18000) (1) Data frame sent\nI0416 23:49:12.899818 379 log.go:172] (0xc000b52000) (0xc000b18000) Stream removed, broadcasting: 1\nI0416 23:49:12.899850 379 log.go:172] (0xc000b52000) Go away received\nI0416 23:49:12.900323 379 log.go:172] (0xc000b52000) (0xc000b18000) Stream removed, broadcasting: 1\nI0416 23:49:12.900346 379 log.go:172] (0xc000b52000) (0xc000b180a0) Stream removed, broadcasting: 3\nI0416 23:49:12.900364 379 log.go:172] (0xc000b52000) (0xc000b183c0) Stream removed, broadcasting: 5\n" Apr 16 23:49:12.905: INFO: stdout: "" Apr 16 23:49:12.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7645 execpod4kfqs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30113' Apr 16 23:49:13.120: INFO: stderr: "I0416 23:49:13.036052 401 log.go:172] (0xc0009a8790) (0xc000376000) Create stream\nI0416 23:49:13.036111 401 log.go:172] (0xc0009a8790) (0xc000376000) Stream 
added, broadcasting: 1\nI0416 23:49:13.039109 401 log.go:172] (0xc0009a8790) Reply frame received for 1\nI0416 23:49:13.039150 401 log.go:172] (0xc0009a8790) (0xc000376140) Create stream\nI0416 23:49:13.039159 401 log.go:172] (0xc0009a8790) (0xc000376140) Stream added, broadcasting: 3\nI0416 23:49:13.040129 401 log.go:172] (0xc0009a8790) Reply frame received for 3\nI0416 23:49:13.040168 401 log.go:172] (0xc0009a8790) (0xc0003761e0) Create stream\nI0416 23:49:13.040183 401 log.go:172] (0xc0009a8790) (0xc0003761e0) Stream added, broadcasting: 5\nI0416 23:49:13.041225 401 log.go:172] (0xc0009a8790) Reply frame received for 5\nI0416 23:49:13.113037 401 log.go:172] (0xc0009a8790) Data frame received for 3\nI0416 23:49:13.113085 401 log.go:172] (0xc000376140) (3) Data frame handling\nI0416 23:49:13.113210 401 log.go:172] (0xc0009a8790) Data frame received for 5\nI0416 23:49:13.113228 401 log.go:172] (0xc0003761e0) (5) Data frame handling\nI0416 23:49:13.113248 401 log.go:172] (0xc0003761e0) (5) Data frame sent\nI0416 23:49:13.113259 401 log.go:172] (0xc0009a8790) Data frame received for 5\nI0416 23:49:13.113266 401 log.go:172] (0xc0003761e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30113\nConnection to 172.17.0.13 30113 port [tcp/30113] succeeded!\nI0416 23:49:13.114606 401 log.go:172] (0xc0009a8790) Data frame received for 1\nI0416 23:49:13.114642 401 log.go:172] (0xc000376000) (1) Data frame handling\nI0416 23:49:13.114665 401 log.go:172] (0xc000376000) (1) Data frame sent\nI0416 23:49:13.114685 401 log.go:172] (0xc0009a8790) (0xc000376000) Stream removed, broadcasting: 1\nI0416 23:49:13.114710 401 log.go:172] (0xc0009a8790) Go away received\nI0416 23:49:13.115185 401 log.go:172] (0xc0009a8790) (0xc000376000) Stream removed, broadcasting: 1\nI0416 23:49:13.115209 401 log.go:172] (0xc0009a8790) (0xc000376140) Stream removed, broadcasting: 3\nI0416 23:49:13.115222 401 log.go:172] (0xc0009a8790) (0xc0003761e0) Stream removed, broadcasting: 5\n" Apr 16 
23:49:13.120: INFO: stdout: "" Apr 16 23:49:13.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7645 execpod4kfqs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30113' Apr 16 23:49:13.576: INFO: stderr: "I0416 23:49:13.496921 422 log.go:172] (0xc0009540b0) (0xc0006e5220) Create stream\nI0416 23:49:13.496995 422 log.go:172] (0xc0009540b0) (0xc0006e5220) Stream added, broadcasting: 1\nI0416 23:49:13.499771 422 log.go:172] (0xc0009540b0) Reply frame received for 1\nI0416 23:49:13.499820 422 log.go:172] (0xc0009540b0) (0xc000932000) Create stream\nI0416 23:49:13.499835 422 log.go:172] (0xc0009540b0) (0xc000932000) Stream added, broadcasting: 3\nI0416 23:49:13.500845 422 log.go:172] (0xc0009540b0) Reply frame received for 3\nI0416 23:49:13.500872 422 log.go:172] (0xc0009540b0) (0xc0006e54a0) Create stream\nI0416 23:49:13.500883 422 log.go:172] (0xc0009540b0) (0xc0006e54a0) Stream added, broadcasting: 5\nI0416 23:49:13.502004 422 log.go:172] (0xc0009540b0) Reply frame received for 5\nI0416 23:49:13.568338 422 log.go:172] (0xc0009540b0) Data frame received for 3\nI0416 23:49:13.568377 422 log.go:172] (0xc000932000) (3) Data frame handling\nI0416 23:49:13.568421 422 log.go:172] (0xc0009540b0) Data frame received for 5\nI0416 23:49:13.568441 422 log.go:172] (0xc0006e54a0) (5) Data frame handling\nI0416 23:49:13.568465 422 log.go:172] (0xc0006e54a0) (5) Data frame sent\nI0416 23:49:13.568480 422 log.go:172] (0xc0009540b0) Data frame received for 5\nI0416 23:49:13.568494 422 log.go:172] (0xc0006e54a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30113\nConnection to 172.17.0.12 30113 port [tcp/30113] succeeded!\nI0416 23:49:13.570225 422 log.go:172] (0xc0009540b0) Data frame received for 1\nI0416 23:49:13.570257 422 log.go:172] (0xc0006e5220) (1) Data frame handling\nI0416 23:49:13.570275 422 log.go:172] (0xc0006e5220) (1) Data frame sent\nI0416 23:49:13.570309 422 log.go:172] 
(0xc0009540b0) (0xc0006e5220) Stream removed, broadcasting: 1\nI0416 23:49:13.570528 422 log.go:172] (0xc0009540b0) Go away received\nI0416 23:49:13.570763 422 log.go:172] (0xc0009540b0) (0xc0006e5220) Stream removed, broadcasting: 1\nI0416 23:49:13.570794 422 log.go:172] (0xc0009540b0) (0xc000932000) Stream removed, broadcasting: 3\nI0416 23:49:13.570826 422 log.go:172] (0xc0009540b0) (0xc0006e54a0) Stream removed, broadcasting: 5\n" Apr 16 23:49:13.576: INFO: stdout: "" Apr 16 23:49:13.576: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:49:13.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7645" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.432 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":37,"skipped":751,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:49:13.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 16 23:49:13.725: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:49:13.730: INFO: Number of nodes with available pods: 0 Apr 16 23:49:13.730: INFO: Node latest-worker is running more than one daemon pod Apr 16 23:49:14.735: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:49:14.737: INFO: Number of nodes with available pods: 0 Apr 16 23:49:14.737: INFO: Node latest-worker is running more than one daemon pod Apr 16 23:49:15.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:49:15.899: INFO: Number of nodes with available pods: 0 Apr 16 23:49:15.899: INFO: Node latest-worker is running more than one daemon pod Apr 16 23:49:16.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:49:16.753: INFO: Number of nodes with available pods: 0 Apr 16 23:49:16.753: INFO: Node latest-worker is running more than one daemon pod Apr 16 23:49:17.735: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:49:17.738: INFO: Number of nodes with available pods: 2 Apr 16 23:49:17.738: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 16 23:49:17.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:49:17.773: INFO: Number of nodes with available pods: 2 Apr 16 23:49:17.773: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7983, will wait for the garbage collector to delete the pods Apr 16 23:49:18.943: INFO: Deleting DaemonSet.extensions daemon-set took: 4.722734ms Apr 16 23:49:19.143: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.242554ms Apr 16 23:49:33.045: INFO: Number of nodes with available pods: 0 Apr 16 23:49:33.045: INFO: Number of running nodes: 0, number of available pods: 0 Apr 16 23:49:33.048: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7983/daemonsets","resourceVersion":"8658711"},"items":null} Apr 16 23:49:33.050: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7983/pods","resourceVersion":"8658711"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:49:33.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7983" for this suite. 
• [SLOW TEST:19.423 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":38,"skipped":753,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:49:33.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1997.svc.cluster.local SRV)" && test -n 
"$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1997.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1997.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 47.208.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.208.47_udp@PTR;check="$$(dig +tcp +noall +answer +search 47.208.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.208.47_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1997.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1997.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 47.208.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.208.47_udp@PTR;check="$$(dig +tcp +noall +answer +search 47.208.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.208.47_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 16 23:49:39.266: INFO: Unable to read wheezy_udp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:39.269: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:39.272: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:39.275: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:39.294: INFO: Unable to read jessie_udp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:39.297: INFO: Unable to read jessie_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:39.300: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod 
dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:39.302: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:39.321: INFO: Lookups using dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00 failed for: [wheezy_udp@dns-test-service.dns-1997.svc.cluster.local wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_udp@dns-test-service.dns-1997.svc.cluster.local jessie_tcp@dns-test-service.dns-1997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local] Apr 16 23:49:44.326: INFO: Unable to read wheezy_udp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:44.333: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:44.337: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:44.339: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod 
dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:44.354: INFO: Unable to read jessie_udp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:44.357: INFO: Unable to read jessie_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:44.359: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:44.362: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:44.379: INFO: Lookups using dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00 failed for: [wheezy_udp@dns-test-service.dns-1997.svc.cluster.local wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_udp@dns-test-service.dns-1997.svc.cluster.local jessie_tcp@dns-test-service.dns-1997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local] Apr 16 23:49:49.326: INFO: Unable to read wheezy_udp@dns-test-service.dns-1997.svc.cluster.local from pod 
dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:49.328: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:49.331: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:49.346: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:49.370: INFO: Unable to read jessie_udp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:49.373: INFO: Unable to read jessie_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:49.376: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:49.379: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not 
find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:49.397: INFO: Lookups using dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00 failed for: [wheezy_udp@dns-test-service.dns-1997.svc.cluster.local wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_udp@dns-test-service.dns-1997.svc.cluster.local jessie_tcp@dns-test-service.dns-1997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local] Apr 16 23:49:54.326: INFO: Unable to read wheezy_udp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:54.329: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:54.333: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:54.335: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:54.354: INFO: Unable to read jessie_udp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods 
dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:54.357: INFO: Unable to read jessie_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:54.360: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:54.362: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:54.379: INFO: Lookups using dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00 failed for: [wheezy_udp@dns-test-service.dns-1997.svc.cluster.local wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_udp@dns-test-service.dns-1997.svc.cluster.local jessie_tcp@dns-test-service.dns-1997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local] Apr 16 23:49:59.326: INFO: Unable to read wheezy_udp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:59.330: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods 
dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:59.334: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:59.337: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:59.359: INFO: Unable to read jessie_udp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:59.362: INFO: Unable to read jessie_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:59.365: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:59.368: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:49:59.386: INFO: Lookups using dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00 failed for: [wheezy_udp@dns-test-service.dns-1997.svc.cluster.local wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_udp@dns-test-service.dns-1997.svc.cluster.local jessie_tcp@dns-test-service.dns-1997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local] Apr 16 23:50:04.346: INFO: Unable to read wheezy_udp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:50:04.350: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:50:04.353: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:50:04.356: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:50:04.375: INFO: Unable to read jessie_udp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:50:04.378: INFO: Unable to read jessie_tcp@dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:50:04.380: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:50:04.383: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local from pod dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00: the server could not find the requested resource (get pods dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00) Apr 16 23:50:04.399: INFO: Lookups using dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00 failed for: [wheezy_udp@dns-test-service.dns-1997.svc.cluster.local wheezy_tcp@dns-test-service.dns-1997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_udp@dns-test-service.dns-1997.svc.cluster.local jessie_tcp@dns-test-service.dns-1997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1997.svc.cluster.local] Apr 16 23:50:09.382: INFO: DNS probes using dns-1997/dns-test-3a472ca0-9f17-42fb-8794-470223ca4f00 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:50:09.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1997" for this suite. 
• [SLOW TEST:36.948 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":39,"skipped":756,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:50:10.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 16 23:50:18.149: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 16 23:50:18.153: INFO: Pod pod-with-prestop-http-hook still exists Apr 16 23:50:20.153: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 16 23:50:20.183: INFO: Pod pod-with-prestop-http-hook still exists Apr 16 23:50:22.153: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 16 23:50:22.157: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:50:22.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9787" for this suite. 
• [SLOW TEST:12.170 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":775,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:50:22.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4334 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 
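The prestop test above creates a pod shaped roughly like the following. This is a minimal sketch: the image, handler address, and hook path are assumptions for illustration, not values taken from the log (only the pod name is).

```yaml
# Hypothetical sketch of the pod-with-prestop-http-hook pod named in the log.
# On deletion, the kubelet fires the preStop httpGet against the separate
# handler pod created in BeforeEach; host/port/path here are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.2        # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop      # assumed hook path
          port: 8080                   # assumed handler port
          host: 10.244.0.10            # assumed handler pod IP
```

The "check prestop hook" step then verifies the handler pod actually received the request before the pod disappeared.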
a new StatefulSet Apr 16 23:50:22.476: INFO: Found 0 stateful pods, waiting for 3 Apr 16 23:50:32.481: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 16 23:50:32.481: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 16 23:50:32.481: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 16 23:50:42.481: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 16 23:50:42.481: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 16 23:50:42.481: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 16 23:50:42.508: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 16 23:50:52.588: INFO: Updating stateful set ss2 Apr 16 23:50:52.625: INFO: Waiting for Pod statefulset-4334/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 16 23:51:02.791: INFO: Found 2 stateful pods, waiting for 3 Apr 16 23:51:12.796: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 16 23:51:12.796: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 16 23:51:12.796: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 16 23:51:12.819: INFO: Updating stateful set ss2 Apr 16 23:51:12.829: INFO: Waiting for Pod statefulset-4334/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 16 23:51:22.855: INFO: Updating 
stateful set ss2 Apr 16 23:51:22.865: INFO: Waiting for StatefulSet statefulset-4334/ss2 to complete update Apr 16 23:51:22.865: INFO: Waiting for Pod statefulset-4334/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 16 23:51:32.874: INFO: Deleting all statefulset in ns statefulset-4334 Apr 16 23:51:32.877: INFO: Scaling statefulset ss2 to 0 Apr 16 23:51:42.898: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 23:51:42.901: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:51:42.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4334" for this suite. • [SLOW TEST:80.734 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":41,"skipped":812,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
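The canary and phased behavior in this test comes from the StatefulSet's partitioned RollingUpdate strategy. A rough sketch (field values beyond the names and images in the log are assumptions): with `partition: 2` only pods with ordinal >= 2 (ss2-2) adopt the new revision, and lowering the partition phases the rollout down to ss2-1 and then ss2-0.

```yaml
# Sketch of the partitioned update strategy exercised above. serviceName
# "test" matches the headless service the log creates; selector labels
# are assumed.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2          # assumed label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2      # only ordinals >= 2 are updated (the canary)
  template:
    metadata:
      labels:
        app: ss2        # assumed label
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.39-alpine   # the updated image from the log
```

Setting the partition above `replicas` (the "partition is greater than the number of replicas" step) updates nothing at all, which the test also verifies.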
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:51:42.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-2c5dbcb5-5be2-4b2c-95b6-715c51ae24cb STEP: Creating a pod to test consume secrets Apr 16 23:51:42.980: INFO: Waiting up to 5m0s for pod "pod-secrets-c297e975-47b4-4f8a-9dd7-2302bef65440" in namespace "secrets-7572" to be "Succeeded or Failed" Apr 16 23:51:42.984: INFO: Pod "pod-secrets-c297e975-47b4-4f8a-9dd7-2302bef65440": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35524ms Apr 16 23:51:44.988: INFO: Pod "pod-secrets-c297e975-47b4-4f8a-9dd7-2302bef65440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007684654s Apr 16 23:51:46.992: INFO: Pod "pod-secrets-c297e975-47b4-4f8a-9dd7-2302bef65440": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012068138s STEP: Saw pod success Apr 16 23:51:46.992: INFO: Pod "pod-secrets-c297e975-47b4-4f8a-9dd7-2302bef65440" satisfied condition "Succeeded or Failed" Apr 16 23:51:46.995: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c297e975-47b4-4f8a-9dd7-2302bef65440 container secret-volume-test: STEP: delete the pod Apr 16 23:51:47.029: INFO: Waiting for pod pod-secrets-c297e975-47b4-4f8a-9dd7-2302bef65440 to disappear Apr 16 23:51:47.033: INFO: Pod pod-secrets-c297e975-47b4-4f8a-9dd7-2302bef65440 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:51:47.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7572" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":852,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:51:47.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-b5bc5a17-a41f-4442-a7fd-7a10a8f926cf STEP: Creating a pod to test consume secrets Apr 16 23:51:47.119: INFO: Waiting up to 5m0s for pod 
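The defaultMode secret test consumes the secret through a volume roughly like this. A sketch only: the mode value, image, command, and mount path are assumptions; the secret name is the one from the log.

```yaml
# Hypothetical pod consuming a secret volume with defaultMode set.
# The test container prints the mounted file and its mode, then exits,
# which is why the pod ends in "Succeeded".
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29          # assumed image
    command: ["ls", "-l", "/etc/secret-volume"]    # assumed command
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-2c5dbcb5-5be2-4b2c-95b6-715c51ae24cb
      defaultMode: 0400                # assumed mode; [LinuxOnly] because modes are POSIX
```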
"pod-secrets-27fdcf97-9693-4cdf-87cb-e8eb45597050" in namespace "secrets-2930" to be "Succeeded or Failed" Apr 16 23:51:47.136: INFO: Pod "pod-secrets-27fdcf97-9693-4cdf-87cb-e8eb45597050": Phase="Pending", Reason="", readiness=false. Elapsed: 16.634677ms Apr 16 23:51:49.213: INFO: Pod "pod-secrets-27fdcf97-9693-4cdf-87cb-e8eb45597050": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093750598s Apr 16 23:51:51.218: INFO: Pod "pod-secrets-27fdcf97-9693-4cdf-87cb-e8eb45597050": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098272609s STEP: Saw pod success Apr 16 23:51:51.218: INFO: Pod "pod-secrets-27fdcf97-9693-4cdf-87cb-e8eb45597050" satisfied condition "Succeeded or Failed" Apr 16 23:51:51.220: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-27fdcf97-9693-4cdf-87cb-e8eb45597050 container secret-volume-test: STEP: delete the pod Apr 16 23:51:51.235: INFO: Waiting for pod pod-secrets-27fdcf97-9693-4cdf-87cb-e8eb45597050 to disappear Apr 16 23:51:51.264: INFO: Pod pod-secrets-27fdcf97-9693-4cdf-87cb-e8eb45597050 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:51:51.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2930" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":863,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:51:51.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Apr 16 23:51:51.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1372' Apr 16 23:51:51.663: INFO: stderr: "" Apr 16 23:51:51.663: INFO: stdout: "pod/pause created\n" Apr 16 23:51:51.663: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 16 23:51:51.663: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1372" to be "running and ready" Apr 16 23:51:51.710: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 46.715043ms Apr 16 23:51:53.714: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05020987s Apr 16 23:51:55.717: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.053710949s Apr 16 23:51:55.717: INFO: Pod "pause" satisfied condition "running and ready" Apr 16 23:51:55.717: INFO: Wanted all 1 pods to be running and ready. 
Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Apr 16 23:51:55.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1372' Apr 16 23:51:55.823: INFO: stderr: "" Apr 16 23:51:55.823: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 16 23:51:55.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1372' Apr 16 23:51:55.909: INFO: stderr: "" Apr 16 23:51:55.909: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 16 23:51:55.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1372' Apr 16 23:51:56.036: INFO: stderr: "" Apr 16 23:51:56.036: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 16 23:51:56.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1372' Apr 16 23:51:56.125: INFO: stderr: "" Apr 16 23:51:56.125: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Apr 16 23:51:56.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1372' Apr 16 23:51:56.232: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 16 23:51:56.232: INFO: stdout: "pod \"pause\" force deleted\n" Apr 16 23:51:56.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1372' Apr 16 23:51:56.377: INFO: stderr: "No resources found in kubectl-1372 namespace.\n" Apr 16 23:51:56.377: INFO: stdout: "" Apr 16 23:51:56.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1372 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 16 23:51:56.529: INFO: stderr: "" Apr 16 23:51:56.529: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:51:56.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1372" for this suite. 
• [SLOW TEST:5.264 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":44,"skipped":887,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:51:56.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 16 23:51:56.797: INFO: Waiting up to 5m0s for pod "pod-18591b14-a9ec-4a12-aad6-8a2a237b86e1" in namespace "emptydir-2949" to be "Succeeded or Failed" Apr 16 23:51:56.805: INFO: Pod "pod-18591b14-a9ec-4a12-aad6-8a2a237b86e1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.529341ms Apr 16 23:51:58.810: INFO: Pod "pod-18591b14-a9ec-4a12-aad6-8a2a237b86e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012984016s Apr 16 23:52:00.815: INFO: Pod "pod-18591b14-a9ec-4a12-aad6-8a2a237b86e1": Phase="Succeeded", Reason="", readiness=false. 
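The label test above boils down to three kubectl invocations, visible in the log. Stripped of the e2e harness flags, they look like this (run against a live cluster; the namespace is the one from the log):

```shell
# Add the label, read it back as a column, then remove it.
kubectl label pods pause testing-label=testing-label-value -n kubectl-1372
kubectl get pod pause -L testing-label -n kubectl-1372    # TESTING-LABEL column shows the value
kubectl label pods pause testing-label- -n kubectl-1372   # trailing '-' removes the label
```

The trailing hyphen syntax in the last command is what the "removing the label" step uses; after it, the `-L testing-label` column comes back empty.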
Elapsed: 4.017349045s STEP: Saw pod success Apr 16 23:52:00.815: INFO: Pod "pod-18591b14-a9ec-4a12-aad6-8a2a237b86e1" satisfied condition "Succeeded or Failed" Apr 16 23:52:00.818: INFO: Trying to get logs from node latest-worker pod pod-18591b14-a9ec-4a12-aad6-8a2a237b86e1 container test-container: STEP: delete the pod Apr 16 23:52:00.852: INFO: Waiting for pod pod-18591b14-a9ec-4a12-aad6-8a2a237b86e1 to disappear Apr 16 23:52:00.854: INFO: Pod pod-18591b14-a9ec-4a12-aad6-8a2a237b86e1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:52:00.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2949" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":890,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:52:00.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready 
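The (root,0644,tmpfs) emptyDir case corresponds to a Memory-medium emptyDir written and inspected by a short-lived container. A sketch under assumed names and commands (only the volume semantics are taken from the test name):

```yaml
# Hypothetical pod for the emptydir 0644-on-tmpfs check: medium: Memory
# backs the volume with tmpfs, and the container verifies the file mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29                      # assumed image
    command: ["sh", "-c", "ls -l /test-volume && mount | grep test-volume"]  # assumed
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory    # tmpfs-backed, per the [LinuxOnly] tag
```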
Apr 16 23:52:01.620: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 16 23:52:03.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677921, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677921, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677921, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722677921, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 16 23:52:06.664: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:52:07.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6372" for this suite. 
STEP: Destroying namespace "webhook-6372-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.305 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":46,"skipped":917,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:52:07.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5a870a83-0acb-43e4-b546-ce3641c5e9b6 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5a870a83-0acb-43e4-b546-ce3641c5e9b6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
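The "listing mutating webhooks" test creates, lists, and collection-deletes objects of roughly this shape. Everything below except the service name and namespace (which appear in the log) is a hypothetical illustration of the API kind involved:

```yaml
# Rough shape of one of the mutating webhook configurations the test
# lists and then deletes as a collection; names, path, and rules are assumed.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook      # hypothetical
webhooks:
- name: adding-configmap-data.example.com   # hypothetical
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-6372
      path: /mutating-configmaps       # assumed path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

This is why the test's two configMap creations behave differently: the first is mutated while the configuration exists, the second is not, because the collection delete has removed it.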
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:52:13.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2980" for this suite. • [SLOW TEST:6.108 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":948,"failed":0} [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:52:13.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-4d5f6bbd-e7b7-491b-b67f-ffb97646820e in namespace container-probe-7212 Apr 16 23:52:17.380: INFO: Started pod test-webserver-4d5f6bbd-e7b7-491b-b67f-ffb97646820e in namespace container-probe-7212 STEP: checking the pod's current state and verifying that 
restartCount is present Apr 16 23:52:17.383: INFO: Initial restart count of pod test-webserver-4d5f6bbd-e7b7-491b-b67f-ffb97646820e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:56:18.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7212" for this suite. • [SLOW TEST:245.075 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":948,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:56:18.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook 
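This probe test is the negative case: a /healthz httpGet liveness probe that keeps succeeding, so restartCount stays 0 for the roughly four-minute observation window (hence the 245-second SLOW TEST). A sketch, with the image, port, and probe timings assumed:

```yaml
# Hypothetical pod for the "should *not* be restarted" case: the webserver
# answers /healthz, the probe never fails, and the kubelet never restarts it.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example         # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.12   # assumed image
    args: ["test-webserver"]                          # assumed
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80                       # assumed port
      initialDelaySeconds: 15          # assumed timing
      failureThreshold: 1              # assumed timing
```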
pod STEP: Wait for the deployment to be ready Apr 16 23:56:19.331: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 16 23:56:21.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678179, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678179, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678179, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678179, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 16 23:56:24.424: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:56:24.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-6183" for this suite. STEP: Destroying namespace "webhook-6183-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.625 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":49,"skipped":975,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:56:24.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-2089 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2089 to expose endpoints map[] Apr 16 23:56:25.145: INFO: successfully validated that service endpoint-test2 in namespace services-2089 
exposes endpoints map[] (32.234598ms elapsed) STEP: Creating pod pod1 in namespace services-2089 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2089 to expose endpoints map[pod1:[80]] Apr 16 23:56:28.250: INFO: successfully validated that service endpoint-test2 in namespace services-2089 exposes endpoints map[pod1:[80]] (3.098792482s elapsed) STEP: Creating pod pod2 in namespace services-2089 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2089 to expose endpoints map[pod1:[80] pod2:[80]] Apr 16 23:56:31.353: INFO: successfully validated that service endpoint-test2 in namespace services-2089 exposes endpoints map[pod1:[80] pod2:[80]] (3.098854259s elapsed) STEP: Deleting pod pod1 in namespace services-2089 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2089 to expose endpoints map[pod2:[80]] Apr 16 23:56:32.414: INFO: successfully validated that service endpoint-test2 in namespace services-2089 exposes endpoints map[pod2:[80]] (1.047738647s elapsed) STEP: Deleting pod pod2 in namespace services-2089 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2089 to expose endpoints map[] Apr 16 23:56:33.445: INFO: successfully validated that service endpoint-test2 in namespace services-2089 exposes endpoints map[] (1.027013448s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:56:33.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2089" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:8.503 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":50,"skipped":987,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:56:33.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 23:56:33.666: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"230133c8-e6dc-4983-a986-f5cc8120ae2b", Controller:(*bool)(0xc0027cad1a), BlockOwnerDeletion:(*bool)(0xc0027cad1b)}} Apr 16 23:56:33.694: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"74c53c56-f32c-4546-9010-be8ee2c97c94", Controller:(*bool)(0xc0024e8fba), BlockOwnerDeletion:(*bool)(0xc0024e8fbb)}} Apr 16 23:56:33.702: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", 
Kind:"Pod", Name:"pod2", UID:"eb5e148a-43eb-4e67-a89a-dd1f6f8e9bef", Controller:(*bool)(0xc002d0197a), BlockOwnerDeletion:(*bool)(0xc002d0197b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:56:38.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1744" for this suite. • [SLOW TEST:5.271 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":51,"skipped":990,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:56:38.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-b4cc47e8-0322-41d7-852c-0e2d257d21c0 STEP: Creating a pod to test consume secrets Apr 16 23:56:38.858: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-d0a44d89-a3e9-447f-b7cc-3895f7ddbf8b" in namespace "projected-7685" to be "Succeeded or Failed" Apr 16 23:56:38.863: INFO: Pod "pod-projected-secrets-d0a44d89-a3e9-447f-b7cc-3895f7ddbf8b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.636071ms Apr 16 23:56:40.867: INFO: Pod "pod-projected-secrets-d0a44d89-a3e9-447f-b7cc-3895f7ddbf8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009205253s Apr 16 23:56:42.919: INFO: Pod "pod-projected-secrets-d0a44d89-a3e9-447f-b7cc-3895f7ddbf8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061143064s Apr 16 23:56:44.922: INFO: Pod "pod-projected-secrets-d0a44d89-a3e9-447f-b7cc-3895f7ddbf8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064543831s STEP: Saw pod success Apr 16 23:56:44.922: INFO: Pod "pod-projected-secrets-d0a44d89-a3e9-447f-b7cc-3895f7ddbf8b" satisfied condition "Succeeded or Failed" Apr 16 23:56:44.925: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-d0a44d89-a3e9-447f-b7cc-3895f7ddbf8b container projected-secret-volume-test: STEP: delete the pod Apr 16 23:56:45.024: INFO: Waiting for pod pod-projected-secrets-d0a44d89-a3e9-447f-b7cc-3895f7ddbf8b to disappear Apr 16 23:56:45.031: INFO: Pod pod-projected-secrets-d0a44d89-a3e9-447f-b7cc-3895f7ddbf8b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:56:45.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7685" for this suite. 
• [SLOW TEST:6.287 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":998,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:56:45.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-95e6bab9-ff3a-413d-9ba7-30ecba1a3980 STEP: Creating a pod to test consume secrets Apr 16 23:56:45.181: INFO: Waiting up to 5m0s for pod "pod-secrets-d1a5b240-2ed3-4d03-aac6-bd0bfc710ef4" in namespace "secrets-4575" to be "Succeeded or Failed" Apr 16 23:56:45.271: INFO: Pod "pod-secrets-d1a5b240-2ed3-4d03-aac6-bd0bfc710ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 89.735122ms Apr 16 23:56:47.274: INFO: Pod "pod-secrets-d1a5b240-2ed3-4d03-aac6-bd0bfc710ef4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.093412831s Apr 16 23:56:49.278: INFO: Pod "pod-secrets-d1a5b240-2ed3-4d03-aac6-bd0bfc710ef4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096933353s STEP: Saw pod success Apr 16 23:56:49.278: INFO: Pod "pod-secrets-d1a5b240-2ed3-4d03-aac6-bd0bfc710ef4" satisfied condition "Succeeded or Failed" Apr 16 23:56:49.281: INFO: Trying to get logs from node latest-worker pod pod-secrets-d1a5b240-2ed3-4d03-aac6-bd0bfc710ef4 container secret-volume-test: STEP: delete the pod Apr 16 23:56:49.326: INFO: Waiting for pod pod-secrets-d1a5b240-2ed3-4d03-aac6-bd0bfc710ef4 to disappear Apr 16 23:56:49.331: INFO: Pod pod-secrets-d1a5b240-2ed3-4d03-aac6-bd0bfc710ef4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:56:49.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4575" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":1000,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:56:49.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 23:56:49.491: INFO: Create a RollingUpdate DaemonSet Apr 16 23:56:49.494: INFO: Check that daemon pods launch on every node of the cluster Apr 16 23:56:49.506: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:56:49.522: INFO: Number of nodes with available pods: 0 Apr 16 23:56:49.522: INFO: Node latest-worker is running more than one daemon pod Apr 16 23:56:50.527: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:56:50.530: INFO: Number of nodes with available pods: 0 Apr 16 23:56:50.530: INFO: Node latest-worker is running more than one daemon pod Apr 16 23:56:51.560: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:56:51.565: INFO: Number of nodes with available pods: 0 Apr 16 23:56:51.565: INFO: Node latest-worker is running more than one daemon pod Apr 16 23:56:52.527: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:56:52.530: INFO: Number of nodes with available pods: 0 Apr 16 23:56:52.530: INFO: Node latest-worker is running more than one daemon pod Apr 16 23:56:53.527: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:56:53.531: INFO: Number of nodes with available pods: 2 Apr 16 23:56:53.531: INFO: Number of running nodes: 2, number of available pods: 2 Apr 16 23:56:53.531: INFO: Update the 
DaemonSet to trigger a rollout Apr 16 23:56:53.538: INFO: Updating DaemonSet daemon-set Apr 16 23:56:57.570: INFO: Roll back the DaemonSet before rollout is complete Apr 16 23:56:57.576: INFO: Updating DaemonSet daemon-set Apr 16 23:56:57.576: INFO: Make sure DaemonSet rollback is complete Apr 16 23:56:57.583: INFO: Wrong image for pod: daemon-set-mz45q. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 16 23:56:57.583: INFO: Pod daemon-set-mz45q is not available Apr 16 23:56:57.590: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:56:58.595: INFO: Wrong image for pod: daemon-set-mz45q. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 16 23:56:58.595: INFO: Pod daemon-set-mz45q is not available Apr 16 23:56:58.599: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 23:56:59.595: INFO: Pod daemon-set-q2zjz is not available Apr 16 23:56:59.599: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6843, will wait for the garbage collector to delete the pods Apr 16 23:56:59.663: INFO: Deleting DaemonSet.extensions daemon-set took: 6.271418ms Apr 16 23:56:59.963: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.22863ms Apr 16 23:57:13.079: INFO: Number of nodes with available pods: 0 Apr 16 23:57:13.079: INFO: Number of running nodes: 0, number of available pods: 0 Apr 16 
23:57:13.081: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6843/daemonsets","resourceVersion":"8660989"},"items":null} Apr 16 23:57:13.083: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6843/pods","resourceVersion":"8660989"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:57:13.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6843" for this suite. • [SLOW TEST:23.758 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":54,"skipped":1007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:57:13.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-dff4126c-5e7e-4b1c-a93a-c7bfd9b0cf50 STEP: Creating a pod to test consume secrets Apr 16 23:57:13.166: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-999482cf-98ae-4558-860b-955de0d5c90a" in namespace "projected-4287" to be "Succeeded or Failed" Apr 16 23:57:13.170: INFO: Pod "pod-projected-secrets-999482cf-98ae-4558-860b-955de0d5c90a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.71714ms Apr 16 23:57:15.174: INFO: Pod "pod-projected-secrets-999482cf-98ae-4558-860b-955de0d5c90a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008174855s Apr 16 23:57:17.179: INFO: Pod "pod-projected-secrets-999482cf-98ae-4558-860b-955de0d5c90a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012473149s STEP: Saw pod success Apr 16 23:57:17.179: INFO: Pod "pod-projected-secrets-999482cf-98ae-4558-860b-955de0d5c90a" satisfied condition "Succeeded or Failed" Apr 16 23:57:17.186: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-999482cf-98ae-4558-860b-955de0d5c90a container projected-secret-volume-test: STEP: delete the pod Apr 16 23:57:17.207: INFO: Waiting for pod pod-projected-secrets-999482cf-98ae-4558-860b-955de0d5c90a to disappear Apr 16 23:57:17.211: INFO: Pod pod-projected-secrets-999482cf-98ae-4558-860b-955de0d5c90a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:57:17.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4287" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":1033,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:57:17.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-3771/secret-test-823946db-74fe-43e5-8f7d-de1e332688b4 STEP: Creating a pod to test consume secrets Apr 16 23:57:17.309: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a7eae40-71c0-46f9-aa7c-d16cf427e3b5" in namespace "secrets-3771" to be "Succeeded or Failed" Apr 16 23:57:17.319: INFO: Pod "pod-configmaps-7a7eae40-71c0-46f9-aa7c-d16cf427e3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.153873ms Apr 16 23:57:19.323: INFO: Pod "pod-configmaps-7a7eae40-71c0-46f9-aa7c-d16cf427e3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013724024s Apr 16 23:57:21.327: INFO: Pod "pod-configmaps-7a7eae40-71c0-46f9-aa7c-d16cf427e3b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017618766s STEP: Saw pod success Apr 16 23:57:21.327: INFO: Pod "pod-configmaps-7a7eae40-71c0-46f9-aa7c-d16cf427e3b5" satisfied condition "Succeeded or Failed" Apr 16 23:57:21.330: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7a7eae40-71c0-46f9-aa7c-d16cf427e3b5 container env-test: STEP: delete the pod Apr 16 23:57:21.351: INFO: Waiting for pod pod-configmaps-7a7eae40-71c0-46f9-aa7c-d16cf427e3b5 to disappear Apr 16 23:57:21.362: INFO: Pod pod-configmaps-7a7eae40-71c0-46f9-aa7c-d16cf427e3b5 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:57:21.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3771" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":1036,"failed":0} ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:57:21.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-858.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-858.svc.cluster.local;test -n 
"$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-858.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-858.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-858.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-858.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 16 23:57:27.499: INFO: DNS probes using dns-858/dns-test-3ec8ff30-ef08-4b77-937d-765e44e25e2d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:57:27.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-858" for this suite. 
• [SLOW TEST:6.240 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":57,"skipped":1036,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:57:27.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Apr 16 23:57:27.945: INFO: Created pod &Pod{ObjectMeta:{dns-231 dns-231 /api/v1/namespaces/dns-231/pods/dns-231 267000bd-17d0-4566-8abd-ea6045d73b44 8661138 0 2020-04-16 23:57:27 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2h4pd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2h4pd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2h4pd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]
LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 23:57:27.985: INFO: The status of Pod dns-231 is Pending, waiting for it to be Running (with Ready = true) Apr 16 23:57:29.989: INFO: The status of Pod dns-231 is Pending, waiting for it to be Running (with Ready = true) Apr 16 23:57:31.989: INFO: The status of Pod dns-231 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 16 23:57:31.989: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-231 PodName:dns-231 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 23:57:31.989: INFO: >>> kubeConfig: /root/.kube/config I0416 23:57:32.022469 7 log.go:172] (0xc002d0a420) (0xc001d520a0) Create stream I0416 23:57:32.022507 7 log.go:172] (0xc002d0a420) (0xc001d520a0) Stream added, broadcasting: 1 I0416 23:57:32.024305 7 log.go:172] (0xc002d0a420) Reply frame received for 1 I0416 23:57:32.024340 7 log.go:172] (0xc002d0a420) (0xc001ce2000) Create stream I0416 23:57:32.024351 7 log.go:172] (0xc002d0a420) (0xc001ce2000) Stream added, broadcasting: 3 I0416 23:57:32.025558 7 log.go:172] (0xc002d0a420) Reply frame received for 3 I0416 23:57:32.025595 7 log.go:172] (0xc002d0a420) (0xc001d521e0) Create stream I0416 23:57:32.025609 7 log.go:172] (0xc002d0a420) (0xc001d521e0) Stream added, broadcasting: 5 I0416 23:57:32.026492 7 log.go:172] (0xc002d0a420) Reply frame received for 5 I0416 23:57:32.120954 7 log.go:172] (0xc002d0a420) Data frame received for 3 I0416 23:57:32.120989 7 log.go:172] (0xc001ce2000) (3) Data frame handling I0416 23:57:32.121013 7 log.go:172] (0xc001ce2000) (3) Data frame sent I0416 23:57:32.122456 7 log.go:172] (0xc002d0a420) Data frame received for 5 I0416 23:57:32.122476 7 log.go:172] (0xc001d521e0) (5) Data frame handling I0416 23:57:32.122699 7 log.go:172] (0xc002d0a420) Data frame received for 3 I0416 23:57:32.122711 7 log.go:172] (0xc001ce2000) (3) Data frame handling I0416 23:57:32.124130 7 log.go:172] (0xc002d0a420) Data frame received for 1 I0416 23:57:32.124150 7 log.go:172] (0xc001d520a0) (1) Data frame handling I0416 23:57:32.124160 7 log.go:172] (0xc001d520a0) (1) Data frame sent I0416 23:57:32.124177 7 log.go:172] (0xc002d0a420) (0xc001d520a0) Stream removed, broadcasting: 1 I0416 23:57:32.124198 7 log.go:172] (0xc002d0a420) Go away received I0416 23:57:32.124349 7 log.go:172] (0xc002d0a420) 
(0xc001d520a0) Stream removed, broadcasting: 1 I0416 23:57:32.124403 7 log.go:172] (0xc002d0a420) (0xc001ce2000) Stream removed, broadcasting: 3 I0416 23:57:32.124421 7 log.go:172] (0xc002d0a420) (0xc001d521e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 16 23:57:32.124: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-231 PodName:dns-231 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 23:57:32.124: INFO: >>> kubeConfig: /root/.kube/config I0416 23:57:32.153034 7 log.go:172] (0xc0027ef3f0) (0xc001ce2500) Create stream I0416 23:57:32.153077 7 log.go:172] (0xc0027ef3f0) (0xc001ce2500) Stream added, broadcasting: 1 I0416 23:57:32.160610 7 log.go:172] (0xc0027ef3f0) Reply frame received for 1 I0416 23:57:32.160663 7 log.go:172] (0xc0027ef3f0) (0xc001ce2640) Create stream I0416 23:57:32.160683 7 log.go:172] (0xc0027ef3f0) (0xc001ce2640) Stream added, broadcasting: 3 I0416 23:57:32.162100 7 log.go:172] (0xc0027ef3f0) Reply frame received for 3 I0416 23:57:32.162132 7 log.go:172] (0xc0027ef3f0) (0xc001d523c0) Create stream I0416 23:57:32.162144 7 log.go:172] (0xc0027ef3f0) (0xc001d523c0) Stream added, broadcasting: 5 I0416 23:57:32.163428 7 log.go:172] (0xc0027ef3f0) Reply frame received for 5 I0416 23:57:32.243133 7 log.go:172] (0xc0027ef3f0) Data frame received for 3 I0416 23:57:32.243163 7 log.go:172] (0xc001ce2640) (3) Data frame handling I0416 23:57:32.243184 7 log.go:172] (0xc001ce2640) (3) Data frame sent I0416 23:57:32.244105 7 log.go:172] (0xc0027ef3f0) Data frame received for 5 I0416 23:57:32.244130 7 log.go:172] (0xc001d523c0) (5) Data frame handling I0416 23:57:32.244149 7 log.go:172] (0xc0027ef3f0) Data frame received for 3 I0416 23:57:32.244176 7 log.go:172] (0xc001ce2640) (3) Data frame handling I0416 23:57:32.245915 7 log.go:172] (0xc0027ef3f0) Data frame received for 1 I0416 23:57:32.245934 7 log.go:172] (0xc001ce2500) (1) Data 
frame handling I0416 23:57:32.245954 7 log.go:172] (0xc001ce2500) (1) Data frame sent I0416 23:57:32.245973 7 log.go:172] (0xc0027ef3f0) (0xc001ce2500) Stream removed, broadcasting: 1 I0416 23:57:32.245988 7 log.go:172] (0xc0027ef3f0) Go away received I0416 23:57:32.246095 7 log.go:172] (0xc0027ef3f0) (0xc001ce2500) Stream removed, broadcasting: 1 I0416 23:57:32.246156 7 log.go:172] (0xc0027ef3f0) (0xc001ce2640) Stream removed, broadcasting: 3 I0416 23:57:32.246188 7 log.go:172] (0xc0027ef3f0) (0xc001d523c0) Stream removed, broadcasting: 5 Apr 16 23:57:32.246: INFO: Deleting pod dns-231... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:57:32.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-231" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":58,"skipped":1071,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:57:32.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating an pod Apr 16 23:57:32.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-6874 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 16 23:57:32.488: INFO: stderr: "" Apr 16 23:57:32.488: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Apr 16 23:57:32.488: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 16 23:57:32.488: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6874" to be "running and ready, or succeeded" Apr 16 23:57:32.555: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 66.786811ms Apr 16 23:57:34.559: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070592206s Apr 16 23:57:36.563: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.074655385s Apr 16 23:57:36.563: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 16 23:57:36.563: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Apr 16 23:57:36.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6874' Apr 16 23:57:36.685: INFO: stderr: "" Apr 16 23:57:36.685: INFO: stdout: "I0416 23:57:34.533929 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/bvs6 597\nI0416 23:57:34.734084 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/p2n7 389\nI0416 23:57:34.934127 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/rbfz 511\nI0416 23:57:35.134148 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/swd 281\nI0416 23:57:35.334161 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/qv25 305\nI0416 23:57:35.534103 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/w7q8 284\nI0416 23:57:35.734099 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/57rh 480\nI0416 23:57:35.934066 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/k8c 425\nI0416 23:57:36.134103 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/qr6w 460\nI0416 23:57:36.334143 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/4rhg 464\nI0416 23:57:36.534115 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/68dz 418\n" STEP: limiting log lines Apr 16 23:57:36.685: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6874 --tail=1' Apr 16 23:57:36.791: INFO: stderr: "" Apr 16 23:57:36.791: INFO: stdout: "I0416 23:57:36.734080 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/lzm 222\n" Apr 16 23:57:36.791: INFO: got output "I0416 23:57:36.734080 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/lzm 222\n" STEP: limiting log bytes Apr 16 23:57:36.791: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6874 --limit-bytes=1' Apr 16 23:57:36.893: INFO: stderr: "" Apr 16 23:57:36.893: INFO: stdout: "I" Apr 16 23:57:36.893: INFO: got output "I" STEP: exposing timestamps Apr 16 23:57:36.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6874 --tail=1 --timestamps' Apr 16 23:57:37.005: INFO: stderr: "" Apr 16 23:57:37.005: INFO: stdout: "2020-04-16T23:57:36.934289192Z I0416 23:57:36.934069 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/zqph 571\n" Apr 16 23:57:37.005: INFO: got output "2020-04-16T23:57:36.934289192Z I0416 23:57:36.934069 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/zqph 571\n" STEP: restricting to a time range Apr 16 23:57:39.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6874 --since=1s' Apr 16 23:57:39.619: INFO: stderr: "" Apr 16 23:57:39.619: INFO: stdout: "I0416 23:57:38.734075 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/ghhs 463\nI0416 23:57:38.934144 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/v7k 240\nI0416 23:57:39.134068 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/l5g 599\nI0416 23:57:39.334110 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/xsj 410\nI0416 23:57:39.534086 1 logs_generator.go:76] 25 POST /api/v1/namespaces/ns/pods/b5s 390\n" Apr 16 23:57:39.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6874 --since=24h' Apr 16 23:57:39.722: INFO: stderr: "" Apr 16 23:57:39.722: INFO: stdout: "I0416 23:57:34.533929 1 logs_generator.go:76] 0 GET 
/api/v1/namespaces/kube-system/pods/bvs6 597\nI0416 23:57:34.734084 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/p2n7 389\nI0416 23:57:34.934127 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/rbfz 511\nI0416 23:57:35.134148 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/swd 281\nI0416 23:57:35.334161 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/qv25 305\nI0416 23:57:35.534103 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/w7q8 284\nI0416 23:57:35.734099 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/57rh 480\nI0416 23:57:35.934066 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/k8c 425\nI0416 23:57:36.134103 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/qr6w 460\nI0416 23:57:36.334143 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/4rhg 464\nI0416 23:57:36.534115 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/68dz 418\nI0416 23:57:36.734080 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/lzm 222\nI0416 23:57:36.934069 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/zqph 571\nI0416 23:57:37.134102 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/4s5 442\nI0416 23:57:37.334117 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/5m5 206\nI0416 23:57:37.534147 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/7dxt 347\nI0416 23:57:37.734101 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/gw5 228\nI0416 23:57:37.934117 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/sgw9 331\nI0416 23:57:38.134119 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/pwdf 385\nI0416 23:57:38.334100 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/jfjw 563\nI0416 23:57:38.534126 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/zhg 359\nI0416 23:57:38.734075 1 logs_generator.go:76] 21 POST 
/api/v1/namespaces/ns/pods/ghhs 463\nI0416 23:57:38.934144 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/v7k 240\nI0416 23:57:39.134068 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/l5g 599\nI0416 23:57:39.334110 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/xsj 410\nI0416 23:57:39.534086 1 logs_generator.go:76] 25 POST /api/v1/namespaces/ns/pods/b5s 390\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 16 23:57:39.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6874' Apr 16 23:57:52.746: INFO: stderr: "" Apr 16 23:57:52.746: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:57:52.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6874" for this suite. • [SLOW TEST:20.462 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":59,"skipped":1080,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:57:52.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:57:52.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4941" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":60,"skipped":1094,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:57:52.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:57:52.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5746" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":61,"skipped":1107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:57:52.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 16 23:57:56.124: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 23:57:56.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9628" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1145,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 23:57:56.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-865afbc9-2f1e-484e-a18d-ad2c4b5743b3 in namespace container-probe-3366 Apr 16 23:58:00.457: INFO: Started pod liveness-865afbc9-2f1e-484e-a18d-ad2c4b5743b3 in namespace container-probe-3366 STEP: checking the pod's current state and verifying that restartCount is present Apr 16 23:58:00.460: INFO: Initial restart count of pod liveness-865afbc9-2f1e-484e-a18d-ad2c4b5743b3 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:01.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3366" for this suite. 
• [SLOW TEST:245.152 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:01.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:05.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8543" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1203,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:05.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-c7806507-1fed-4633-9255-3ee724421944 STEP: Creating a pod to test consume configMaps Apr 17 00:02:05.673: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d74ca74-921a-4345-8e64-f8fc9ccdf1f2" in namespace "configmap-6776" to be "Succeeded or Failed" Apr 17 00:02:05.677: INFO: Pod "pod-configmaps-0d74ca74-921a-4345-8e64-f8fc9ccdf1f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4028ms Apr 17 00:02:07.694: INFO: Pod "pod-configmaps-0d74ca74-921a-4345-8e64-f8fc9ccdf1f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021914906s Apr 17 00:02:09.699: INFO: Pod "pod-configmaps-0d74ca74-921a-4345-8e64-f8fc9ccdf1f2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026218285s STEP: Saw pod success Apr 17 00:02:09.699: INFO: Pod "pod-configmaps-0d74ca74-921a-4345-8e64-f8fc9ccdf1f2" satisfied condition "Succeeded or Failed" Apr 17 00:02:09.702: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-0d74ca74-921a-4345-8e64-f8fc9ccdf1f2 container configmap-volume-test: STEP: delete the pod Apr 17 00:02:09.722: INFO: Waiting for pod pod-configmaps-0d74ca74-921a-4345-8e64-f8fc9ccdf1f2 to disappear Apr 17 00:02:09.742: INFO: Pod pod-configmaps-0d74ca74-921a-4345-8e64-f8fc9ccdf1f2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:09.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6776" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1204,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:09.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 17 00:02:09.809: INFO: Waiting up to 5m0s for pod 
"downward-api-bbc9de41-afaa-4c7e-b430-bb7aca1b9917" in namespace "downward-api-4004" to be "Succeeded or Failed" Apr 17 00:02:09.823: INFO: Pod "downward-api-bbc9de41-afaa-4c7e-b430-bb7aca1b9917": Phase="Pending", Reason="", readiness=false. Elapsed: 13.427386ms Apr 17 00:02:11.826: INFO: Pod "downward-api-bbc9de41-afaa-4c7e-b430-bb7aca1b9917": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017019539s Apr 17 00:02:13.829: INFO: Pod "downward-api-bbc9de41-afaa-4c7e-b430-bb7aca1b9917": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019634055s STEP: Saw pod success Apr 17 00:02:13.829: INFO: Pod "downward-api-bbc9de41-afaa-4c7e-b430-bb7aca1b9917" satisfied condition "Succeeded or Failed" Apr 17 00:02:13.831: INFO: Trying to get logs from node latest-worker pod downward-api-bbc9de41-afaa-4c7e-b430-bb7aca1b9917 container dapi-container: STEP: delete the pod Apr 17 00:02:13.944: INFO: Waiting for pod downward-api-bbc9de41-afaa-4c7e-b430-bb7aca1b9917 to disappear Apr 17 00:02:13.998: INFO: Pod downward-api-bbc9de41-afaa-4c7e-b430-bb7aca1b9917 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:13.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4004" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:14.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 17 00:02:18.165: INFO: &Pod{ObjectMeta:{send-events-2e0f6d11-ef33-4097-8bb9-0624bea8066a events-5694 /api/v1/namespaces/events-5694/pods/send-events-2e0f6d11-ef33-4097-8bb9-0624bea8066a 9bdc3698-b8b0-4201-aeb5-ad83faa8a4f1 8662179 0 2020-04-17 00:02:14 +0000 UTC map[name:foo time:132130814] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kd2nq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kd2nq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kd2nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Contai
ner{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:02:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:02:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:02:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:02:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.193,StartTime:2020-04-17 00:02:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:02:16 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://0fc97c2b56a85860d6abb500e3070381ba564fcc92ab2fc739b9b26ebccda1e6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 17 00:02:20.170: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 17 00:02:22.189: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:22.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5694" for this suite. 
• [SLOW TEST:8.134 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":67,"skipped":1275,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:22.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 17 00:02:22.284: INFO: Waiting up to 5m0s for pod "pod-97cc970f-b21b-4fac-a3ff-4894cadd2242" in namespace "emptydir-4399" to be "Succeeded or Failed" Apr 17 00:02:22.329: INFO: Pod "pod-97cc970f-b21b-4fac-a3ff-4894cadd2242": Phase="Pending", Reason="", readiness=false. Elapsed: 45.042823ms Apr 17 00:02:24.348: INFO: Pod "pod-97cc970f-b21b-4fac-a3ff-4894cadd2242": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063580366s Apr 17 00:02:26.352: INFO: Pod "pod-97cc970f-b21b-4fac-a3ff-4894cadd2242": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067507141s STEP: Saw pod success Apr 17 00:02:26.352: INFO: Pod "pod-97cc970f-b21b-4fac-a3ff-4894cadd2242" satisfied condition "Succeeded or Failed" Apr 17 00:02:26.355: INFO: Trying to get logs from node latest-worker2 pod pod-97cc970f-b21b-4fac-a3ff-4894cadd2242 container test-container: STEP: delete the pod Apr 17 00:02:26.388: INFO: Waiting for pod pod-97cc970f-b21b-4fac-a3ff-4894cadd2242 to disappear Apr 17 00:02:26.396: INFO: Pod pod-97cc970f-b21b-4fac-a3ff-4894cadd2242 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:26.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4399" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:26.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:26.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6101" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1329,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:26.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 17 00:02:26.643: INFO: Waiting up to 5m0s for pod "pod-c39fd500-d1d4-48e7-97fd-8428931eb484" in namespace "emptydir-4114" to be "Succeeded or Failed" Apr 17 00:02:26.648: INFO: Pod "pod-c39fd500-d1d4-48e7-97fd-8428931eb484": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180725ms Apr 17 00:02:28.652: INFO: Pod "pod-c39fd500-d1d4-48e7-97fd-8428931eb484": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008401816s Apr 17 00:02:30.656: INFO: Pod "pod-c39fd500-d1d4-48e7-97fd-8428931eb484": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012832536s STEP: Saw pod success Apr 17 00:02:30.656: INFO: Pod "pod-c39fd500-d1d4-48e7-97fd-8428931eb484" satisfied condition "Succeeded or Failed" Apr 17 00:02:30.659: INFO: Trying to get logs from node latest-worker2 pod pod-c39fd500-d1d4-48e7-97fd-8428931eb484 container test-container: STEP: delete the pod Apr 17 00:02:30.682: INFO: Waiting for pod pod-c39fd500-d1d4-48e7-97fd-8428931eb484 to disappear Apr 17 00:02:30.686: INFO: Pod pod-c39fd500-d1d4-48e7-97fd-8428931eb484 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:30.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4114" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1337,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:30.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
configmap-test-volume-09796776-2ace-4073-94e3-b30b58eeaea0 STEP: Creating a pod to test consume configMaps Apr 17 00:02:30.792: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d0b1520-5fa2-4c1a-908c-a2c11cb226b0" in namespace "configmap-3151" to be "Succeeded or Failed" Apr 17 00:02:30.800: INFO: Pod "pod-configmaps-1d0b1520-5fa2-4c1a-908c-a2c11cb226b0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.076456ms Apr 17 00:02:32.804: INFO: Pod "pod-configmaps-1d0b1520-5fa2-4c1a-908c-a2c11cb226b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011671538s Apr 17 00:02:34.809: INFO: Pod "pod-configmaps-1d0b1520-5fa2-4c1a-908c-a2c11cb226b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016455085s STEP: Saw pod success Apr 17 00:02:34.809: INFO: Pod "pod-configmaps-1d0b1520-5fa2-4c1a-908c-a2c11cb226b0" satisfied condition "Succeeded or Failed" Apr 17 00:02:34.812: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1d0b1520-5fa2-4c1a-908c-a2c11cb226b0 container configmap-volume-test: STEP: delete the pod Apr 17 00:02:34.873: INFO: Waiting for pod pod-configmaps-1d0b1520-5fa2-4c1a-908c-a2c11cb226b0 to disappear Apr 17 00:02:34.959: INFO: Pod pod-configmaps-1d0b1520-5fa2-4c1a-908c-a2c11cb226b0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:34.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3151" for this suite. 
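[Editor's note] The ConfigMap-volume test that finishes here builds a pod of roughly the following shape: a ConfigMap volume mounted into a container that runs as a non-root UID, so the test can verify the mounted file is readable without root. This is a sketch only; the names, image, UID, and paths below are illustrative assumptions, not values taken from this log (the real spec is constructed in Go inside the e2e framework).

```python
def configmap_pod_manifest(configmap_name, run_as_user=1000):
    """Sketch of a pod that mounts a ConfigMap volume and runs as a non-root UID."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-configmaps-demo"},  # illustrative name
        "spec": {
            # Non-root UID: this is what makes the test the "as non-root" variant.
            "securityContext": {"runAsUser": run_as_user},
            "restartPolicy": "Never",
            "containers": [{
                "name": "configmap-volume-test",
                "image": "busybox",  # assumed image; the e2e suite uses its own
                # The test container just reads a key projected into the volume.
                "command": ["cat", "/etc/configmap-volume/data-1"],
                "volumeMounts": [{
                    "name": "configmap-volume",
                    "mountPath": "/etc/configmap-volume",
                }],
            }],
            "volumes": [{
                "name": "configmap-volume",
                "configMap": {"name": configmap_name},
            }],
        },
    }

manifest = configmap_pod_manifest("configmap-test-volume-demo")
```

The pod is then waited on until it reaches "Succeeded or Failed", exactly as the polling lines above show.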
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1338,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:34.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 17 00:02:35.490: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 17 00:02:37.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678555, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678555, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678555, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678555, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 00:02:40.531: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:02:40.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:41.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3090" for this suite. 
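[Editor's note] The conversion webhook exercised above has a simple contract: it receives a ConversionReview whose request lists objects to convert (in the "non homogeneous list" case, a mix of v1 and v2 CRs), and must respond with the same request UID and every object rewritten to the desired apiVersion. A minimal sketch of that contract — field names follow the apiextensions.k8s.io ConversionReview schema, but the conversion logic and any CR contents are invented:

```python
def convert_review(review):
    """Answer a ConversionReview: echo the UID, convert every object.

    Real webhooks also transform per-version fields; this sketch only
    rewrites apiVersion, which is the minimum a trivial conversion does.
    """
    request = review["request"]
    desired = request["desiredAPIVersion"]
    converted = []
    for obj in request["objects"]:
        obj = dict(obj)               # shallow copy; don't mutate the input
        obj["apiVersion"] = desired   # every object leaves at the desired version
        converted.append(obj)
    return {
        "apiVersion": review["apiVersion"],
        "kind": "ConversionReview",
        "response": {
            "uid": request["uid"],                    # must match the request
            "result": {"status": "Success"},
            "convertedObjects": converted,
        },
    }
```

The "List CRs in v1 / List CRs in v2" steps above succeed only if the webhook honors this shape for both directions.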
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.983 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":72,"skipped":1351,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:41.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0417 00:02:43.089243 7 metrics_grabber.go:84] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 17 00:02:43.089: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:43.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6951" for this suite. 
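[Editor's note] The garbage-collector assertion above ("expected 0 rs … expected 0 pods") rests on ownerReferences: the Deployment owns its ReplicaSet, which owns its pods, so deleting the Deployment without orphaning eventually collects the whole chain. A toy model of that transitive collection — UIDs and objects below are invented, and the real controller works incrementally from watch events rather than over a full object list:

```python
def collect_garbage(objects, deleted_uid):
    """Return the objects that survive once everything owned
    (transitively) by deleted_uid has been garbage-collected."""
    doomed = {deleted_uid}
    changed = True
    while changed:  # propagate deletion down owner chains until a fixed point
        changed = False
        for obj in objects:
            uid = obj["metadata"]["uid"]
            refs = obj["metadata"].get("ownerReferences", [])
            if uid not in doomed and any(r["uid"] in doomed for r in refs):
                doomed.add(uid)
                changed = True
    return [o for o in objects if o["metadata"]["uid"] not in doomed]
```

With a Deployment "d1" owning ReplicaSet "rs1" owning pod "p1", collecting "d1" leaves only unrelated objects — which is what the test polls for before gathering metrics.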
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":73,"skipped":1352,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:43.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 00:02:43.294: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfd9192c-002a-48d3-8011-fbbf4dabda4f" in namespace "projected-4321" to be "Succeeded or Failed" Apr 17 00:02:43.297: INFO: Pod "downwardapi-volume-cfd9192c-002a-48d3-8011-fbbf4dabda4f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.822913ms Apr 17 00:02:45.359: INFO: Pod "downwardapi-volume-cfd9192c-002a-48d3-8011-fbbf4dabda4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065651708s Apr 17 00:02:47.364: INFO: Pod "downwardapi-volume-cfd9192c-002a-48d3-8011-fbbf4dabda4f": Phase="Running", Reason="", readiness=true. Elapsed: 4.069905889s Apr 17 00:02:49.368: INFO: Pod "downwardapi-volume-cfd9192c-002a-48d3-8011-fbbf4dabda4f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.074332766s STEP: Saw pod success Apr 17 00:02:49.368: INFO: Pod "downwardapi-volume-cfd9192c-002a-48d3-8011-fbbf4dabda4f" satisfied condition "Succeeded or Failed" Apr 17 00:02:49.371: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-cfd9192c-002a-48d3-8011-fbbf4dabda4f container client-container: STEP: delete the pod Apr 17 00:02:49.405: INFO: Waiting for pod downwardapi-volume-cfd9192c-002a-48d3-8011-fbbf4dabda4f to disappear Apr 17 00:02:49.420: INFO: Pod downwardapi-volume-cfd9192c-002a-48d3-8011-fbbf4dabda4f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:49.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4321" for this suite. • [SLOW TEST:6.357 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:49.454: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:02:50.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8436' Apr 17 00:02:53.695: INFO: stderr: "" Apr 17 00:02:53.695: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 17 00:02:53.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8436' Apr 17 00:02:54.040: INFO: stderr: "" Apr 17 00:02:54.040: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 17 00:02:55.044: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 00:02:55.044: INFO: Found 0 / 1 Apr 17 00:02:56.045: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 00:02:56.045: INFO: Found 0 / 1 Apr 17 00:02:57.044: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 00:02:57.044: INFO: Found 0 / 1 Apr 17 00:02:58.044: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 00:02:58.044: INFO: Found 1 / 1 Apr 17 00:02:58.045: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 17 00:02:58.048: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 00:02:58.048: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 17 00:02:58.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-z5256 --namespace=kubectl-8436' Apr 17 00:02:58.158: INFO: stderr: "" Apr 17 00:02:58.158: INFO: stdout: "Name: agnhost-master-z5256\nNamespace: kubectl-8436\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Fri, 17 Apr 2020 00:02:53 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.168\nIPs:\n IP: 10.244.1.168\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://2c4a343c06ee93b8aa582432d448add4b3bdf37c414387a783329a2141be3e49\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 17 Apr 2020 00:02:56 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-72ck5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-72ck5:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-72ck5\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-8436/agnhost-master-z5256 to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 2s kubelet, latest-worker2 Started container 
agnhost-master\n" Apr 17 00:02:58.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8436' Apr 17 00:02:58.276: INFO: stderr: "" Apr 17 00:02:58.276: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8436\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-z5256\n" Apr 17 00:02:58.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8436' Apr 17 00:02:58.372: INFO: stderr: "" Apr 17 00:02:58.372: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8436\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.138.136\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.168:6379\nSession Affinity: None\nEvents: \n" Apr 17 00:02:58.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 17 00:02:58.493: INFO: stderr: "" Apr 17 00:02:58.493: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n 
node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Fri, 17 Apr 2020 00:02:55 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 16 Apr 2020 23:58:24 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 16 Apr 2020 23:58:24 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 16 Apr 2020 23:58:24 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 16 Apr 2020 23:58:24 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system 
coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 32d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 32d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 32d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 32d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 32d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 32d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 17 00:02:58.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-8436' Apr 17 00:02:58.607: INFO: stderr: "" Apr 17 00:02:58.607: INFO: stdout: "Name: kubectl-8436\nLabels: e2e-framework=kubectl\n e2e-run=100d5a7b-98b4-4a9b-9804-b0c331afa0ed\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:02:58.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8436" for this suite. 
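[Editor's note] The stdout captured above is kubectl's columnar "Key: value" describe layout. To make the shape the test checks against concrete, here is a minimal formatter in that style for a pod's metadata — purely an illustration of the output format, not kubectl's actual implementation (which lives in the kubectl describe printers, in Go):

```python
def describe_lines(obj):
    """Render object metadata in a kubectl-describe-like aligned layout."""
    meta = obj["metadata"]
    labels = ",".join(
        f"{k}={v}" for k, v in sorted(meta.get("labels", {}).items())
    )
    rows = [
        ("Name", meta["name"]),
        ("Namespace", meta["namespace"]),
        ("Labels", labels or "<none>"),
    ]
    # Pad keys so the value column lines up, as in the log output above.
    width = max(len(k) for k, _ in rows) + 2
    return "\n".join(f"{k + ':':<{width}}{v}" for k, v in rows)
```

The e2e test asserts that "relevant information" (names, labels, events, and so on) appears in this text, rather than parsing any structured output.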
• [SLOW TEST:9.160 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":75,"skipped":1448,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:02:58.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-b4x8 STEP: Creating a pod to test atomic-volume-subpath Apr 17 00:02:58.730: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-b4x8" in namespace "subpath-2070" to be "Succeeded or Failed" Apr 17 00:02:58.747: INFO: Pod "pod-subpath-test-configmap-b4x8": 
Phase="Pending", Reason="", readiness=false. Elapsed: 16.545023ms Apr 17 00:03:00.755: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025058512s Apr 17 00:03:02.759: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Running", Reason="", readiness=true. Elapsed: 4.028631402s Apr 17 00:03:04.763: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Running", Reason="", readiness=true. Elapsed: 6.033096997s Apr 17 00:03:06.785: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Running", Reason="", readiness=true. Elapsed: 8.054756603s Apr 17 00:03:08.789: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Running", Reason="", readiness=true. Elapsed: 10.059029235s Apr 17 00:03:10.794: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Running", Reason="", readiness=true. Elapsed: 12.063280866s Apr 17 00:03:12.798: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Running", Reason="", readiness=true. Elapsed: 14.067638705s Apr 17 00:03:14.802: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Running", Reason="", readiness=true. Elapsed: 16.071780592s Apr 17 00:03:16.806: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Running", Reason="", readiness=true. Elapsed: 18.07596125s Apr 17 00:03:18.811: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Running", Reason="", readiness=true. Elapsed: 20.080510683s Apr 17 00:03:20.815: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Running", Reason="", readiness=true. Elapsed: 22.084575152s Apr 17 00:03:22.819: INFO: Pod "pod-subpath-test-configmap-b4x8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.088728548s STEP: Saw pod success Apr 17 00:03:22.819: INFO: Pod "pod-subpath-test-configmap-b4x8" satisfied condition "Succeeded or Failed" Apr 17 00:03:22.823: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-b4x8 container test-container-subpath-configmap-b4x8: STEP: delete the pod Apr 17 00:03:22.862: INFO: Waiting for pod pod-subpath-test-configmap-b4x8 to disappear Apr 17 00:03:22.893: INFO: Pod pod-subpath-test-configmap-b4x8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-b4x8 Apr 17 00:03:22.893: INFO: Deleting pod "pod-subpath-test-configmap-b4x8" in namespace "subpath-2070" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:03:22.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2070" for this suite. • [SLOW TEST:24.289 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":76,"skipped":1457,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:03:22.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 00:03:22.968: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28eec4cb-bbf8-4bc6-91bc-b4fc048c2424" in namespace "downward-api-153" to be "Succeeded or Failed" Apr 17 00:03:22.990: INFO: Pod "downwardapi-volume-28eec4cb-bbf8-4bc6-91bc-b4fc048c2424": Phase="Pending", Reason="", readiness=false. Elapsed: 22.076934ms Apr 17 00:03:24.994: INFO: Pod "downwardapi-volume-28eec4cb-bbf8-4bc6-91bc-b4fc048c2424": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026155064s Apr 17 00:03:26.999: INFO: Pod "downwardapi-volume-28eec4cb-bbf8-4bc6-91bc-b4fc048c2424": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030586623s STEP: Saw pod success Apr 17 00:03:26.999: INFO: Pod "downwardapi-volume-28eec4cb-bbf8-4bc6-91bc-b4fc048c2424" satisfied condition "Succeeded or Failed" Apr 17 00:03:27.002: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-28eec4cb-bbf8-4bc6-91bc-b4fc048c2424 container client-container: STEP: delete the pod Apr 17 00:03:27.036: INFO: Waiting for pod downwardapi-volume-28eec4cb-bbf8-4bc6-91bc-b4fc048c2424 to disappear Apr 17 00:03:27.050: INFO: Pod downwardapi-volume-28eec4cb-bbf8-4bc6-91bc-b4fc048c2424 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:03:27.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-153" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1468,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:03:27.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7642, will wait for the garbage collector to delete the pods Apr 17 00:03:31.259: INFO: 
Deleting Job.batch foo took: 8.260155ms Apr 17 00:03:31.559: INFO: Terminating Job.batch foo pods took: 300.289888ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:04:13.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7642" for this suite. • [SLOW TEST:46.014 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":78,"skipped":1496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:04:13.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 00:04:13.692: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 17 
00:04:15.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678653, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678653, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678653, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678653, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 00:04:17.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678653, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678653, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678653, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678653, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the 
service has paired with the endpoint Apr 17 00:04:20.733: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:04:20.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2035" for this suite. STEP: Destroying namespace "webhook-2035-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.878 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":79,"skipped":1533,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes 
client Apr 17 00:04:20.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 17 00:04:21.055: INFO: Waiting up to 5m0s for pod "pod-4b2975c6-346d-437e-b50d-4f03b0868962" in namespace "emptydir-727" to be "Succeeded or Failed" Apr 17 00:04:21.066: INFO: Pod "pod-4b2975c6-346d-437e-b50d-4f03b0868962": Phase="Pending", Reason="", readiness=false. Elapsed: 10.130053ms Apr 17 00:04:23.069: INFO: Pod "pod-4b2975c6-346d-437e-b50d-4f03b0868962": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013496137s Apr 17 00:04:25.072: INFO: Pod "pod-4b2975c6-346d-437e-b50d-4f03b0868962": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016808995s STEP: Saw pod success Apr 17 00:04:25.072: INFO: Pod "pod-4b2975c6-346d-437e-b50d-4f03b0868962" satisfied condition "Succeeded or Failed" Apr 17 00:04:25.075: INFO: Trying to get logs from node latest-worker2 pod pod-4b2975c6-346d-437e-b50d-4f03b0868962 container test-container: STEP: delete the pod Apr 17 00:04:25.110: INFO: Waiting for pod pod-4b2975c6-346d-437e-b50d-4f03b0868962 to disappear Apr 17 00:04:25.120: INFO: Pod pod-4b2975c6-346d-437e-b50d-4f03b0868962 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:04:25.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-727" for this suite. 
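For reference, the (root,0644,default) emptyDir case above can be reproduced outside the suite with a pod along these lines. This is an illustrative sketch, not the exact spec the framework generates: the pod and volume names are hypothetical, and the agnhost `mounttest` image/arguments are assumptions based on how these conformance pods typically look.

```yaml
# Illustrative only -- approximates the generated test pod, not a verbatim copy.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12  # assumed image
    args:                           # assumed mounttest flags: create a 0644 file, print its perms
    - mounttest
    - --new_file_0644=/test-volume/test-file
    - --file_perm=/test-volume/test-file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # default medium, i.e. node-local disk
```

The test then asserts the pod reaches "Succeeded" and that the container's log reports the expected ownership (root) and mode (0644).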
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1555,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:04:25.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 00:04:27.920: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 17 00:04:29.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678667, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678667, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678667, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678667, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 00:04:31.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678667, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678667, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678667, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678667, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 00:04:34.990: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:04:35.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6403" for this suite. STEP: Destroying namespace "webhook-6403-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.055 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":81,"skipped":1560,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:04:35.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-56a818a6-cdb4-4a67-8d37-8299ac7254fa 
STEP: Creating a pod to test consume configMaps Apr 17 00:04:35.367: INFO: Waiting up to 5m0s for pod "pod-configmaps-f24c5582-6ec8-4264-8018-f44151748804" in namespace "configmap-9448" to be "Succeeded or Failed" Apr 17 00:04:35.370: INFO: Pod "pod-configmaps-f24c5582-6ec8-4264-8018-f44151748804": Phase="Pending", Reason="", readiness=false. Elapsed: 2.437515ms Apr 17 00:04:37.373: INFO: Pod "pod-configmaps-f24c5582-6ec8-4264-8018-f44151748804": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005822135s Apr 17 00:04:39.377: INFO: Pod "pod-configmaps-f24c5582-6ec8-4264-8018-f44151748804": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009824662s STEP: Saw pod success Apr 17 00:04:39.377: INFO: Pod "pod-configmaps-f24c5582-6ec8-4264-8018-f44151748804" satisfied condition "Succeeded or Failed" Apr 17 00:04:39.380: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f24c5582-6ec8-4264-8018-f44151748804 container configmap-volume-test: STEP: delete the pod Apr 17 00:04:39.413: INFO: Waiting for pod pod-configmaps-f24c5582-6ec8-4264-8018-f44151748804 to disappear Apr 17 00:04:39.425: INFO: Pod pod-configmaps-f24c5582-6ec8-4264-8018-f44151748804 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:04:39.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9448" for this suite. 
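The "consumable in multiple volumes in the same pod" case above amounts to mounting one ConfigMap at two paths in a single pod. A minimal sketch, with hypothetical names (the generated names in the log are random) and an assumed test image:

```yaml
# Illustrative sketch -- the same ConfigMap backs two volumes in one pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-multi-demo        # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-multi-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12  # assumed image
    args: ["mounttest", "--file_content=/etc/configmap-volume/data-1"] # assumed flag
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
    - name: configmap-volume-2      # second mount of the same ConfigMap
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-multi-demo
  - name: configmap-volume-2
    configMap:
      name: configmap-multi-demo
```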
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1576,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:04:39.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 17 00:04:39.520: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 17 00:04:39.548: INFO: Waiting for terminating namespaces to be deleted... 
Apr 17 00:04:39.551: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 17 00:04:39.557: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:04:39.557: INFO: Container kindnet-cni ready: true, restart count 0 Apr 17 00:04:39.557: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:04:39.557: INFO: Container kube-proxy ready: true, restart count 0 Apr 17 00:04:39.557: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 17 00:04:39.579: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:04:39.579: INFO: Container kindnet-cni ready: true, restart count 0 Apr 17 00:04:39.579: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:04:39.579: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-df95f82d-7afd-4fef-a032-147d0e69dfe0 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-df95f82d-7afd-4fef-a032-147d0e69dfe0 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-df95f82d-7afd-4fef-a032-147d0e69dfe0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:04:55.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4921" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.354 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":83,"skipped":1581,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:04:55.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:04:55.841: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8dc6c6e9-f85b-49a3-9ab8-2fa301de02f9" in namespace "security-context-test-4128" to be "Succeeded or Failed" Apr 17 00:04:55.845: INFO: Pod "busybox-user-65534-8dc6c6e9-f85b-49a3-9ab8-2fa301de02f9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.686266ms Apr 17 00:04:57.849: INFO: Pod "busybox-user-65534-8dc6c6e9-f85b-49a3-9ab8-2fa301de02f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007943295s Apr 17 00:04:59.854: INFO: Pod "busybox-user-65534-8dc6c6e9-f85b-49a3-9ab8-2fa301de02f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012300963s Apr 17 00:04:59.854: INFO: Pod "busybox-user-65534-8dc6c6e9-f85b-49a3-9ab8-2fa301de02f9" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:04:59.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4128" for this suite. 
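The runAsUser case above is driven by the pod-level/container-level `securityContext`. Roughly, the suite creates a busybox pod like the following (name and image tag are assumptions) and asserts it completes with the container running as uid 65534, the conventional "nobody" user:

```yaml
# Illustrative sketch of the uid-65534 conformance pod.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29             # assumed tag
    command: ["sh", "-c", "id -u"]  # prints the effective uid
    securityContext:
      runAsUser: 65534              # the uid the test asserts
```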
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1584,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:04:59.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Apr 17 00:04:59.938: INFO: Waiting up to 5m0s for pod "var-expansion-12f5e0ee-b31c-4fc9-a082-3c0827d114d9" in namespace "var-expansion-2289" to be "Succeeded or Failed" Apr 17 00:04:59.941: INFO: Pod "var-expansion-12f5e0ee-b31c-4fc9-a082-3c0827d114d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.704824ms Apr 17 00:05:01.966: INFO: Pod "var-expansion-12f5e0ee-b31c-4fc9-a082-3c0827d114d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028069837s Apr 17 00:05:04.008: INFO: Pod "var-expansion-12f5e0ee-b31c-4fc9-a082-3c0827d114d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07045839s Apr 17 00:05:06.011: INFO: Pod "var-expansion-12f5e0ee-b31c-4fc9-a082-3c0827d114d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.073524671s STEP: Saw pod success Apr 17 00:05:06.011: INFO: Pod "var-expansion-12f5e0ee-b31c-4fc9-a082-3c0827d114d9" satisfied condition "Succeeded or Failed" Apr 17 00:05:06.014: INFO: Trying to get logs from node latest-worker pod var-expansion-12f5e0ee-b31c-4fc9-a082-3c0827d114d9 container dapi-container: STEP: delete the pod Apr 17 00:05:06.055: INFO: Waiting for pod var-expansion-12f5e0ee-b31c-4fc9-a082-3c0827d114d9 to disappear Apr 17 00:05:06.070: INFO: Pod var-expansion-12f5e0ee-b31c-4fc9-a082-3c0827d114d9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:05:06.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2289" for this suite. • [SLOW TEST:6.215 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1594,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:05:06.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:05:06.184: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Apr 17 00:05:06.191: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:06.195: INFO: Number of nodes with available pods: 0 Apr 17 00:05:06.195: INFO: Node latest-worker is running more than one daemon pod Apr 17 00:05:07.201: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:07.204: INFO: Number of nodes with available pods: 0 Apr 17 00:05:07.204: INFO: Node latest-worker is running more than one daemon pod Apr 17 00:05:08.200: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:08.203: INFO: Number of nodes with available pods: 0 Apr 17 00:05:08.203: INFO: Node latest-worker is running more than one daemon pod Apr 17 00:05:09.200: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:09.203: INFO: Number of nodes with available pods: 1 Apr 17 00:05:09.203: INFO: Node latest-worker is running more than one daemon pod Apr 17 00:05:10.199: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:10.202: INFO: Number of nodes with available pods: 2 Apr 17 00:05:10.202: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 17 00:05:10.283: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:10.283: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:10.299: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:11.304: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:11.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:11.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:12.334: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:12.334: INFO: Pod daemon-set-jgh54 is not available Apr 17 00:05:12.334: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 17 00:05:12.338: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:13.305: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:13.305: INFO: Pod daemon-set-jgh54 is not available Apr 17 00:05:13.305: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:13.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:14.304: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:14.304: INFO: Pod daemon-set-jgh54 is not available Apr 17 00:05:14.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:14.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:15.304: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:15.304: INFO: Pod daemon-set-jgh54 is not available Apr 17 00:05:15.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 17 00:05:15.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:16.303: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:16.303: INFO: Pod daemon-set-jgh54 is not available Apr 17 00:05:16.303: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:16.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:17.304: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:17.304: INFO: Pod daemon-set-jgh54 is not available Apr 17 00:05:17.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:17.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:18.304: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:18.304: INFO: Pod daemon-set-jgh54 is not available Apr 17 00:05:18.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 17 00:05:18.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:19.304: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:19.304: INFO: Pod daemon-set-jgh54 is not available Apr 17 00:05:19.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:19.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:20.304: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:20.304: INFO: Pod daemon-set-jgh54 is not available Apr 17 00:05:20.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:20.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:21.306: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:21.306: INFO: Pod daemon-set-jgh54 is not available Apr 17 00:05:21.306: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 17 00:05:21.310: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:22.304: INFO: Wrong image for pod: daemon-set-jgh54. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:22.304: INFO: Pod daemon-set-jgh54 is not available Apr 17 00:05:22.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:22.314: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:23.304: INFO: Pod daemon-set-9xkgf is not available Apr 17 00:05:23.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:23.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:24.304: INFO: Pod daemon-set-9xkgf is not available Apr 17 00:05:24.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:24.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:25.304: INFO: Pod daemon-set-9xkgf is not available Apr 17 00:05:25.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 17 00:05:25.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:26.303: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:26.307: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:27.308: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:27.308: INFO: Pod daemon-set-nxrj8 is not available Apr 17 00:05:27.312: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:28.304: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:28.304: INFO: Pod daemon-set-nxrj8 is not available Apr 17 00:05:28.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:29.303: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:29.303: INFO: Pod daemon-set-nxrj8 is not available Apr 17 00:05:29.307: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:30.304: INFO: Wrong image for pod: daemon-set-nxrj8. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:30.304: INFO: Pod daemon-set-nxrj8 is not available Apr 17 00:05:30.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:31.308: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:31.308: INFO: Pod daemon-set-nxrj8 is not available Apr 17 00:05:31.312: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:32.303: INFO: Wrong image for pod: daemon-set-nxrj8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 17 00:05:32.303: INFO: Pod daemon-set-nxrj8 is not available Apr 17 00:05:32.307: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:33.302: INFO: Pod daemon-set-bnrvj is not available Apr 17 00:05:33.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 17 00:05:33.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:33.310: INFO: Number of nodes with available pods: 1 Apr 17 00:05:33.310: INFO: Node latest-worker is running more than one daemon pod Apr 17 00:05:34.316: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:34.319: INFO: Number of nodes with available pods: 1 Apr 17 00:05:34.319: INFO: Node latest-worker is running more than one daemon pod Apr 17 00:05:35.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:35.343: INFO: Number of nodes with available pods: 1 Apr 17 00:05:35.343: INFO: Node latest-worker is running more than one daemon pod Apr 17 00:05:36.315: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 17 00:05:36.319: INFO: Number of nodes with available pods: 2 Apr 17 00:05:36.319: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1441, will wait for the garbage collector to delete the pods Apr 17 00:05:36.393: INFO: Deleting DaemonSet.extensions daemon-set took: 6.382396ms Apr 17 00:05:36.694: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.240044ms Apr 17 00:05:42.997: INFO: Number of nodes with available pods: 0 Apr 17 00:05:42.997: INFO: Number of running nodes: 0, number of 
available pods: 0 Apr 17 00:05:42.999: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1441/daemonsets","resourceVersion":"8663554"},"items":null} Apr 17 00:05:43.002: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1441/pods","resourceVersion":"8663554"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:05:43.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1441" for this suite. • [SLOW TEST:36.939 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":86,"skipped":1596,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:05:43.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] 
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 17 00:05:46.145: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:05:46.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-393" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1609,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:05:46.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:05:57.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7425" for this suite. • [SLOW TEST:11.090 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":88,"skipped":1624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:05:57.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:05:57.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4776" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":89,"skipped":1690,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:05:57.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:05:57.477: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:06:01.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3365" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1695,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:06:01.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 00:06:02.099: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 17 00:06:04.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678762, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678762, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678762, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678762, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 00:06:07.137: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:06:07.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2469" for this suite. STEP: Destroying namespace "webhook-2469-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.594 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":91,"skipped":1716,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:06:07.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 00:06:08.639: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 17 00:06:10.746: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678768, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678768, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678768, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722678768, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 00:06:13.784: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:06:13.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3084" for this suite. STEP: Destroying namespace "webhook-3084-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.806 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":92,"skipped":1720,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:06:14.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 17 00:06:17.212: INFO: Expected: &{DONE} to 
match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:06:17.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2642" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1742,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:06:17.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 17 00:06:17.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9746' Apr 17 00:06:17.695: INFO: stderr: "" Apr 17 00:06:17.695: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all 
containers in name=update-demo pods to come up. Apr 17 00:06:17.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9746' Apr 17 00:06:17.810: INFO: stderr: "" Apr 17 00:06:17.810: INFO: stdout: "update-demo-nautilus-cwq86 update-demo-nautilus-x2vhm " Apr 17 00:06:17.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwq86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:17.901: INFO: stderr: "" Apr 17 00:06:17.901: INFO: stdout: "" Apr 17 00:06:17.901: INFO: update-demo-nautilus-cwq86 is created but not running Apr 17 00:06:22.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9746' Apr 17 00:06:23.014: INFO: stderr: "" Apr 17 00:06:23.014: INFO: stdout: "update-demo-nautilus-cwq86 update-demo-nautilus-x2vhm " Apr 17 00:06:23.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwq86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:23.120: INFO: stderr: "" Apr 17 00:06:23.120: INFO: stdout: "true" Apr 17 00:06:23.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwq86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:23.215: INFO: stderr: "" Apr 17 00:06:23.215: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 17 00:06:23.215: INFO: validating pod update-demo-nautilus-cwq86 Apr 17 00:06:23.219: INFO: got data: { "image": "nautilus.jpg" } Apr 17 00:06:23.219: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 00:06:23.219: INFO: update-demo-nautilus-cwq86 is verified up and running Apr 17 00:06:23.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x2vhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:23.313: INFO: stderr: "" Apr 17 00:06:23.313: INFO: stdout: "true" Apr 17 00:06:23.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x2vhm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:23.417: INFO: stderr: "" Apr 17 00:06:23.417: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 17 00:06:23.417: INFO: validating pod update-demo-nautilus-x2vhm Apr 17 00:06:23.421: INFO: got data: { "image": "nautilus.jpg" } Apr 17 00:06:23.421: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 00:06:23.421: INFO: update-demo-nautilus-x2vhm is verified up and running STEP: scaling down the replication controller Apr 17 00:06:23.424: INFO: scanned /root for discovery docs: Apr 17 00:06:23.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9746' Apr 17 00:06:24.543: INFO: stderr: "" Apr 17 00:06:24.543: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 17 00:06:24.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9746' Apr 17 00:06:24.637: INFO: stderr: "" Apr 17 00:06:24.637: INFO: stdout: "update-demo-nautilus-cwq86 update-demo-nautilus-x2vhm " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 17 00:06:29.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9746' Apr 17 00:06:29.735: INFO: stderr: "" Apr 17 00:06:29.735: INFO: stdout: "update-demo-nautilus-cwq86 update-demo-nautilus-x2vhm " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 17 00:06:34.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9746' Apr 17 00:06:34.819: INFO: stderr: "" Apr 17 00:06:34.819: INFO: stdout: "update-demo-nautilus-cwq86 " Apr 17 00:06:34.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwq86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:34.910: INFO: stderr: "" Apr 17 00:06:34.910: INFO: stdout: "true" Apr 17 00:06:34.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwq86 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:34.999: INFO: stderr: "" Apr 17 00:06:34.999: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 17 00:06:34.999: INFO: validating pod update-demo-nautilus-cwq86 Apr 17 00:06:35.002: INFO: got data: { "image": "nautilus.jpg" } Apr 17 00:06:35.002: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 00:06:35.002: INFO: update-demo-nautilus-cwq86 is verified up and running STEP: scaling up the replication controller Apr 17 00:06:35.003: INFO: scanned /root for discovery docs: Apr 17 00:06:35.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9746' Apr 17 00:06:36.138: INFO: stderr: "" Apr 17 00:06:36.138: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 17 00:06:36.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9746' Apr 17 00:06:36.235: INFO: stderr: "" Apr 17 00:06:36.235: INFO: stdout: "update-demo-nautilus-cwq86 update-demo-nautilus-nc9td " Apr 17 00:06:36.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwq86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:36.329: INFO: stderr: "" Apr 17 00:06:36.329: INFO: stdout: "true" Apr 17 00:06:36.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwq86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:36.421: INFO: stderr: "" Apr 17 00:06:36.421: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 17 00:06:36.421: INFO: validating pod update-demo-nautilus-cwq86 Apr 17 00:06:36.459: INFO: got data: { "image": "nautilus.jpg" } Apr 17 00:06:36.459: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 00:06:36.459: INFO: update-demo-nautilus-cwq86 is verified up and running Apr 17 00:06:36.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nc9td -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:36.546: INFO: stderr: "" Apr 17 00:06:36.546: INFO: stdout: "" Apr 17 00:06:36.546: INFO: update-demo-nautilus-nc9td is created but not running Apr 17 00:06:41.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9746' Apr 17 00:06:41.646: INFO: stderr: "" Apr 17 00:06:41.646: INFO: stdout: "update-demo-nautilus-cwq86 update-demo-nautilus-nc9td " Apr 17 00:06:41.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwq86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:41.753: INFO: stderr: "" Apr 17 00:06:41.753: INFO: stdout: "true" Apr 17 00:06:41.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwq86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:41.850: INFO: stderr: "" Apr 17 00:06:41.850: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 17 00:06:41.850: INFO: validating pod update-demo-nautilus-cwq86 Apr 17 00:06:41.854: INFO: got data: { "image": "nautilus.jpg" } Apr 17 00:06:41.854: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 17 00:06:41.854: INFO: update-demo-nautilus-cwq86 is verified up and running Apr 17 00:06:41.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nc9td -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:41.947: INFO: stderr: "" Apr 17 00:06:41.947: INFO: stdout: "true" Apr 17 00:06:41.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nc9td -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9746' Apr 17 00:06:42.059: INFO: stderr: "" Apr 17 00:06:42.059: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 17 00:06:42.059: INFO: validating pod update-demo-nautilus-nc9td Apr 17 00:06:42.077: INFO: got data: { "image": "nautilus.jpg" } Apr 17 00:06:42.077: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 00:06:42.077: INFO: update-demo-nautilus-nc9td is verified up and running STEP: using delete to clean up resources Apr 17 00:06:42.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9746' Apr 17 00:06:42.178: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 17 00:06:42.178: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 17 00:06:42.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9746' Apr 17 00:06:42.277: INFO: stderr: "No resources found in kubectl-9746 namespace.\n" Apr 17 00:06:42.277: INFO: stdout: "" Apr 17 00:06:42.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9746 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 17 00:06:42.382: INFO: stderr: "" Apr 17 00:06:42.383: INFO: stdout: "update-demo-nautilus-cwq86\nupdate-demo-nautilus-nc9td\n" Apr 17 00:06:42.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9746' Apr 17 00:06:42.982: INFO: stderr: "No resources found in kubectl-9746 namespace.\n" Apr 17 00:06:42.982: INFO: stdout: "" Apr 17 00:06:42.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9746 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 17 00:06:43.071: INFO: stderr: "" Apr 17 00:06:43.071: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:06:43.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9746" for this suite. 
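The replica polling visible above (`expected=1 actual=2` until the scale-down settles) reduces to counting names in the go-template output of `kubectl get pods`. A minimal local sketch of that check, using pod names copied from this log and a hard-coded stdout string in place of a live cluster:

```shell
# Simulated stdout of:
#   kubectl get pods -o template \
#     --template={{range .items}}{{.metadata.name}} {{end}} -l name=update-demo
# (pod names are taken from the log; no cluster is required here).
stdout="update-demo-nautilus-cwq86 update-demo-nautilus-x2vhm "

# The e2e counts whitespace-separated names to get the actual replica count,
# then retries every 5s until it matches the expected count.
actual=$(echo "$stdout" | wc -w)
echo "Replicas for name=update-demo: expected=1 actual=$actual"
```

Once one pod terminates, the same template emits a single name and the loop exits.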
• [SLOW TEST:25.808 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":94,"skipped":1744,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:06:43.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 17 00:06:43.306: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 17 00:06:53.910: INFO: >>> kubeConfig: /root/.kube/config Apr 17 00:06:56.814: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:07:07.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5274" for this suite. • [SLOW TEST:24.335 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":95,"skipped":1774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:07:07.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2269.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-2269.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 17 00:07:13.532: INFO: DNS probes using dns-test-db015b9e-8b25-44c3-b57e-643bbb4ac635 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2269.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2269.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 17 00:07:19.607: INFO: File wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local from pod dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 17 00:07:19.610: INFO: File jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local from pod dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 17 00:07:19.610: INFO: Lookups using dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 failed for: [wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local] Apr 17 00:07:24.615: INFO: File wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local from pod dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 17 00:07:24.618: INFO: File jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local from pod dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 17 00:07:24.618: INFO: Lookups using dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 failed for: [wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local] Apr 17 00:07:29.615: INFO: File wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local from pod dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 17 00:07:29.618: INFO: File jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local from pod dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 17 00:07:29.618: INFO: Lookups using dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 failed for: [wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local] Apr 17 00:07:34.615: INFO: File wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local from pod dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 17 00:07:34.620: INFO: File jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local from pod dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 17 00:07:34.620: INFO: Lookups using dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 failed for: [wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local] Apr 17 00:07:39.614: INFO: File wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local from pod dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 17 00:07:39.618: INFO: File jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local from pod dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 17 00:07:39.618: INFO: Lookups using dns-2269/dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 failed for: [wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local] Apr 17 00:07:44.617: INFO: DNS probes using dns-test-dd97ea17-0b91-40ca-8850-eba7436f4940 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2269.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2269.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2269.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2269.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 17 00:07:51.303: INFO: DNS probes using dns-test-431309f0-7803-44e3-aec4-837d3f54c5c6 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:07:51.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2269" for this suite. 
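The ~25 seconds of retries above are expected behavior: the probe pods keep writing whatever `dig +short ... CNAME` returns, and the test re-reads the result files every 5s until the stale `foo.example.com.` answer is replaced by the updated ExternalName target. A minimal sketch of that comparison loop, with the dig answers hard-coded from this log instead of queried from a live resolver:

```shell
# Hypothetical sequence of dig answers: two stale cached CNAME targets,
# then the updated one (values taken from the log; no DNS server needed).
answers="foo.example.com. foo.example.com. bar.example.com."
expected="bar.example.com."

for got in $answers; do
  if [ "$got" = "$expected" ]; then
    echo "lookup succeeded"
  else
    echo "contains '$got' instead of '$expected'"
  fi
done
```

The real test tolerates the stale answers because CoreDNS caches the old CNAME until its TTL expires.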
• [SLOW TEST:43.994 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":96,"skipped":1816,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:07:51.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:07:51.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8127" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":97,"skipped":1838,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:07:51.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4586.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4586.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 17 00:07:57.935: INFO: DNS probes using dns-4586/dns-test-7f808a5c-e0e5-4988-bf24-b34442b51579 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:07:57.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4586" for this suite. 
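The doubled `$$` in the probe script above is shell/template escaping; once the pod runs it, the command builds the pod's DNS A record name by swapping the dots in the pod IP for dashes. A local sketch with a hypothetical pod IP standing in for the real probe's `hostname -i`:

```shell
# Hypothetical pod IP in place of `hostname -i`; no cluster is needed.
pod_ip="10.244.1.7"

# Pod A records have the form <ip-with-dashes>.<namespace>.pod.cluster.local;
# awk splits the IP on dots and rejoins the octets with dashes.
pod_arec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4586.pod.cluster.local"}')
echo "$pod_arec"
```

The probe then checks that a `dig` lookup of that name returns a non-empty answer before writing its `OK` result file.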
• [SLOW TEST:6.358 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":98,"skipped":1839,"failed":0} SSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:07:58.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Apr 17 00:07:58.254: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Apr 17 00:07:58.258: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Apr 17 00:07:58.258: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Apr 17 00:07:58.269: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Apr 17 00:07:58.270: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Apr 17 00:07:58.415: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Apr 17 00:07:58.415: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Apr 17 00:08:05.966: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:08:05.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-4494" for this suite.
• [SLOW TEST:7.983 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":99,"skipped":1843,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:08:05.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8973
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 17 00:08:06.087: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 17 00:08:06.110: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 17 00:08:08.256: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 17 00:08:10.114: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 17 00:08:12.114: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 17 00:08:14.114: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 17 00:08:16.114: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 17 00:08:18.114: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 17 00:08:20.114: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 17 00:08:22.114: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 17 00:08:22.119: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 17 00:08:24.123: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 17 00:08:28.182: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.212 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8973 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 17 00:08:28.182: INFO: >>> kubeConfig: /root/.kube/config
I0417 00:08:28.219907 7 log.go:172] (0xc0035104d0) (0xc001ce3b80) Create stream
I0417 00:08:28.219947 7 log.go:172] (0xc0035104d0) (0xc001ce3b80) Stream added, broadcasting: 1
I0417 00:08:28.221710 7 log.go:172] (0xc0035104d0) Reply frame received for 1
I0417 00:08:28.221734 7 log.go:172] (0xc0035104d0) (0xc0016c5180) Create stream
I0417 00:08:28.221740 7 log.go:172] (0xc0035104d0) (0xc0016c5180) Stream added, broadcasting: 3
I0417 00:08:28.222671 7 log.go:172] (0xc0035104d0) Reply frame received for 3
I0417 00:08:28.222724 7 log.go:172] (0xc0035104d0) (0xc00133df40) Create stream
I0417 00:08:28.222742 7 log.go:172] (0xc0035104d0) (0xc00133df40) Stream added, broadcasting: 5
I0417 00:08:28.223709 7 log.go:172] (0xc0035104d0) Reply frame received for 5
I0417 00:08:29.294711 7 log.go:172] (0xc0035104d0) Data frame received for 3
I0417 00:08:29.294828 7 log.go:172] (0xc0016c5180) (3) Data frame handling
I0417 00:08:29.294857 7 log.go:172] (0xc0016c5180) (3) Data frame sent
I0417 00:08:29.294870 7 log.go:172] (0xc0035104d0) Data frame received for 3
I0417 00:08:29.294887 7 log.go:172] (0xc0016c5180) (3) Data frame handling
I0417 00:08:29.294944 7 log.go:172] (0xc0035104d0) Data frame received for 5
I0417 00:08:29.295012 7 log.go:172] (0xc00133df40) (5) Data frame handling
I0417 00:08:29.297427 7 log.go:172] (0xc0035104d0) Data frame received for 1
I0417 00:08:29.297464 7 log.go:172] (0xc001ce3b80) (1) Data frame handling
I0417 00:08:29.297487 7 log.go:172] (0xc001ce3b80) (1) Data frame sent
I0417 00:08:29.297526 7 log.go:172] (0xc0035104d0) (0xc001ce3b80) Stream removed, broadcasting: 1
I0417 00:08:29.297568 7 log.go:172] (0xc0035104d0) Go away received
I0417 00:08:29.297710 7 log.go:172] (0xc0035104d0) (0xc001ce3b80) Stream removed, broadcasting: 1
I0417 00:08:29.297741 7 log.go:172] (0xc0035104d0) (0xc0016c5180) Stream removed, broadcasting: 3
I0417 00:08:29.297762 7 log.go:172] (0xc0035104d0) (0xc00133df40) Stream removed, broadcasting: 5
Apr 17 00:08:29.297: INFO: Found all expected endpoints: [netserver-0]
Apr 17 00:08:29.301: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.187 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8973 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 17 00:08:29.302: INFO: >>> kubeConfig: /root/.kube/config
I0417 00:08:29.339066 7 log.go:172] (0xc0030813f0) (0xc0016c5b80) Create stream
I0417 00:08:29.339104 7 log.go:172] (0xc0030813f0) (0xc0016c5b80) Stream added, broadcasting: 1
I0417 00:08:29.341054 7 log.go:172] (0xc0030813f0) Reply frame received for 1
I0417 00:08:29.341094 7 log.go:172] (0xc0030813f0) (0xc001a8f7c0) Create stream
I0417 00:08:29.341245 7 log.go:172] (0xc0030813f0) (0xc001a8f7c0) Stream added, broadcasting: 3
I0417 00:08:29.342344 7 log.go:172] (0xc0030813f0) Reply frame received for 3
I0417 00:08:29.342391 7 log.go:172] (0xc0030813f0) (0xc0016c5cc0) Create stream
I0417 00:08:29.342408 7 log.go:172] (0xc0030813f0) (0xc0016c5cc0) Stream added, broadcasting: 5
I0417 00:08:29.343652 7 log.go:172] (0xc0030813f0) Reply frame received for 5
I0417 00:08:30.434887 7 log.go:172] (0xc0030813f0) Data frame received for 3
I0417 00:08:30.434942 7 log.go:172] (0xc001a8f7c0) (3) Data frame handling
I0417 00:08:30.434969 7 log.go:172] (0xc001a8f7c0) (3) Data frame sent
I0417 00:08:30.435243 7 log.go:172] (0xc0030813f0) Data frame received for 5
I0417 00:08:30.435288 7 log.go:172] (0xc0016c5cc0) (5) Data frame handling
I0417 00:08:30.435483 7 log.go:172] (0xc0030813f0) Data frame received for 3
I0417 00:08:30.435510 7 log.go:172] (0xc001a8f7c0) (3) Data frame handling
I0417 00:08:30.437359 7 log.go:172] (0xc0030813f0) Data frame received for 1
I0417 00:08:30.437384 7 log.go:172] (0xc0016c5b80) (1) Data frame handling
I0417 00:08:30.437397 7 log.go:172] (0xc0016c5b80) (1) Data frame sent
I0417 00:08:30.437488 7 log.go:172] (0xc0030813f0) (0xc0016c5b80) Stream removed, broadcasting: 1
I0417 00:08:30.437647 7 log.go:172] (0xc0030813f0) (0xc0016c5b80) Stream removed, broadcasting: 1
I0417 00:08:30.437667 7 log.go:172] (0xc0030813f0) (0xc001a8f7c0) Stream removed, broadcasting: 3
I0417 00:08:30.437780 7 log.go:172] (0xc0030813f0) Go away received
I0417 00:08:30.437845 7 log.go:172] (0xc0030813f0) (0xc0016c5cc0) Stream removed, broadcasting: 5
Apr 17 00:08:30.437: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:08:30.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8973" for this suite.
• [SLOW TEST:24.450 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1913,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:08:30.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 17 00:08:30.556: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a85f25b-6ee5-4b4a-8f71-8bd070d3eeef" in namespace "projected-2138" to be "Succeeded or Failed"
Apr 17 00:08:30.575: INFO: Pod "downwardapi-volume-1a85f25b-6ee5-4b4a-8f71-8bd070d3eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 19.227337ms
Apr 17 00:08:32.579: INFO: Pod "downwardapi-volume-1a85f25b-6ee5-4b4a-8f71-8bd070d3eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023124779s
Apr 17 00:08:34.583: INFO: Pod "downwardapi-volume-1a85f25b-6ee5-4b4a-8f71-8bd070d3eeef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027628136s
STEP: Saw pod success
Apr 17 00:08:34.584: INFO: Pod "downwardapi-volume-1a85f25b-6ee5-4b4a-8f71-8bd070d3eeef" satisfied condition "Succeeded or Failed"
Apr 17 00:08:34.587: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1a85f25b-6ee5-4b4a-8f71-8bd070d3eeef container client-container: <nil>
STEP: delete the pod
Apr 17 00:08:34.636: INFO: Waiting for pod downwardapi-volume-1a85f25b-6ee5-4b4a-8f71-8bd070d3eeef to disappear
Apr 17 00:08:34.663: INFO: Pod downwardapi-volume-1a85f25b-6ee5-4b4a-8f71-8bd070d3eeef no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:08:34.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2138" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1927,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:08:34.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 17 00:08:34.729: INFO: Waiting up to 5m0s for pod "downward-api-dcbbc127-992f-4654-9e0f-82c608c70ed0" in namespace "downward-api-5689" to be "Succeeded or Failed"
Apr 17 00:08:34.733: INFO: Pod "downward-api-dcbbc127-992f-4654-9e0f-82c608c70ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.640359ms
Apr 17 00:08:36.784: INFO: Pod "downward-api-dcbbc127-992f-4654-9e0f-82c608c70ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054462255s
Apr 17 00:08:38.788: INFO: Pod "downward-api-dcbbc127-992f-4654-9e0f-82c608c70ed0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058919479s
STEP: Saw pod success
Apr 17 00:08:38.788: INFO: Pod "downward-api-dcbbc127-992f-4654-9e0f-82c608c70ed0" satisfied condition "Succeeded or Failed"
Apr 17 00:08:38.791: INFO: Trying to get logs from node latest-worker2 pod downward-api-dcbbc127-992f-4654-9e0f-82c608c70ed0 container dapi-container: <nil>
STEP: delete the pod
Apr 17 00:08:38.839: INFO: Waiting for pod downward-api-dcbbc127-992f-4654-9e0f-82c608c70ed0 to disappear
Apr 17 00:08:38.852: INFO: Pod downward-api-dcbbc127-992f-4654-9e0f-82c608c70ed0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:08:38.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5689" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1940,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:08:38.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:08:45.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9819" for this suite.
STEP: Destroying namespace "nsdeletetest-7543" for this suite.
Apr 17 00:08:45.319: INFO: Namespace nsdeletetest-7543 was already deleted
STEP: Destroying namespace "nsdeletetest-4941" for this suite.
• [SLOW TEST:6.464 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":103,"skipped":1940,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:08:45.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 17 00:08:45.507: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-a048c05b-0e1a-45c9-9aa0-050bc85f0450" in namespace "security-context-test-5136" to be "Succeeded or Failed"
Apr 17 00:08:45.521: INFO: Pod "alpine-nnp-false-a048c05b-0e1a-45c9-9aa0-050bc85f0450": Phase="Pending", Reason="", readiness=false. Elapsed: 14.307413ms
Apr 17 00:08:47.525: INFO: Pod "alpine-nnp-false-a048c05b-0e1a-45c9-9aa0-050bc85f0450": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018226802s
Apr 17 00:08:49.529: INFO: Pod "alpine-nnp-false-a048c05b-0e1a-45c9-9aa0-050bc85f0450": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022629228s
Apr 17 00:08:49.529: INFO: Pod "alpine-nnp-false-a048c05b-0e1a-45c9-9aa0-050bc85f0450" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:08:49.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5136" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1966,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:08:49.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Apr 17 00:08:49.625: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Apr 17 00:08:49.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9327'
Apr 17 00:08:49.957: INFO: stderr: ""
Apr 17 00:08:49.957: INFO: stdout: "service/agnhost-slave created\n"
Apr 17 00:08:49.958: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Apr 17 00:08:49.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9327'
Apr 17 00:08:50.254: INFO: stderr: ""
Apr 17 00:08:50.254: INFO: stdout: "service/agnhost-master created\n"
Apr 17 00:08:50.254: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 17 00:08:50.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9327'
Apr 17 00:08:50.595: INFO: stderr: ""
Apr 17 00:08:50.595: INFO: stdout: "service/frontend created\n"
Apr 17 00:08:50.595: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Apr 17 00:08:50.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9327'
Apr 17 00:08:50.900: INFO: stderr: ""
Apr 17 00:08:50.900: INFO: stdout: "deployment.apps/frontend created\n"
Apr 17 00:08:50.900: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 17 00:08:50.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9327'
Apr 17 00:08:51.226: INFO: stderr: ""
Apr 17 00:08:51.226: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 17 00:08:51.227: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 17 00:08:51.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9327'
Apr 17 00:08:51.473: INFO: stderr: ""
Apr 17 00:08:51.473: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 17 00:08:51.473: INFO: Waiting for all frontend pods to be Running.
Apr 17 00:09:01.524: INFO: Waiting for frontend to serve content.
Apr 17 00:09:01.535: INFO: Trying to add a new entry to the guestbook.
Apr 17 00:09:01.546: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 17 00:09:01.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9327'
Apr 17 00:09:01.750: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 17 00:09:01.750: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 17 00:09:01.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9327'
Apr 17 00:09:01.863: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 17 00:09:01.863: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 17 00:09:01.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9327'
Apr 17 00:09:01.995: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 17 00:09:01.995: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 17 00:09:01.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9327'
Apr 17 00:09:02.097: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 17 00:09:02.097: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 17 00:09:02.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9327'
Apr 17 00:09:02.210: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 17 00:09:02.210: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 17 00:09:02.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9327'
Apr 17 00:09:02.313: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 17 00:09:02.313: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:09:02.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9327" for this suite.
• [SLOW TEST:12.760 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":105,"skipped":1972,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:09:02.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 17 00:09:03.965: INFO: Pod name wrapped-volume-race-105aa5da-2d6e-4dcb-a085-cb7f73f14078: Found 0 pods out of 5
Apr 17 00:09:08.979: INFO: Pod name wrapped-volume-race-105aa5da-2d6e-4dcb-a085-cb7f73f14078: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-105aa5da-2d6e-4dcb-a085-cb7f73f14078 in namespace emptydir-wrapper-3315, will wait for the garbage collector to delete the pods
Apr 17 00:09:21.073: INFO: Deleting ReplicationController wrapped-volume-race-105aa5da-2d6e-4dcb-a085-cb7f73f14078 took: 7.463635ms
Apr 17 00:09:21.173: INFO: Terminating ReplicationController wrapped-volume-race-105aa5da-2d6e-4dcb-a085-cb7f73f14078 pods took: 100.212567ms
STEP: Creating RC which spawns configmap-volume pods
Apr 17 00:09:33.833: INFO: Pod name wrapped-volume-race-27be93c2-4930-49c2-800f-526d8a6d7c46: Found 1 pods out of 5
Apr 17 00:09:38.842: INFO: Pod name wrapped-volume-race-27be93c2-4930-49c2-800f-526d8a6d7c46: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-27be93c2-4930-49c2-800f-526d8a6d7c46 in namespace emptydir-wrapper-3315, will wait for the garbage collector to delete the pods
Apr 17 00:09:54.970: INFO: Deleting ReplicationController wrapped-volume-race-27be93c2-4930-49c2-800f-526d8a6d7c46 took: 26.790452ms
Apr 17 00:09:55.370: INFO: Terminating ReplicationController wrapped-volume-race-27be93c2-4930-49c2-800f-526d8a6d7c46 pods took: 400.279001ms
STEP: Creating RC which spawns configmap-volume pods
Apr 17 00:10:03.613: INFO: Pod name wrapped-volume-race-3bba2a3f-89a9-48a6-b2a8-ea59ceed3507: Found 0 pods out of 5
Apr 17 00:10:08.620: INFO: Pod name wrapped-volume-race-3bba2a3f-89a9-48a6-b2a8-ea59ceed3507: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3bba2a3f-89a9-48a6-b2a8-ea59ceed3507 in namespace emptydir-wrapper-3315, will wait for the garbage collector to delete the pods
Apr 17 00:10:20.709: INFO: Deleting ReplicationController wrapped-volume-race-3bba2a3f-89a9-48a6-b2a8-ea59ceed3507 took: 11.667427ms
Apr 17 00:10:21.109: INFO: Terminating ReplicationController wrapped-volume-race-3bba2a3f-89a9-48a6-b2a8-ea59ceed3507 pods took: 400.264545ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:10:34.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3315" for this suite.
• [SLOW TEST:92.028 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":106,"skipped":1981,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:10:34.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8959 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8959;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8959 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8959;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8959.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8959.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8959.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8959.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8959.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8959.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8959.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8959.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8959.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8959.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8959.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 177.60.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.60.177_udp@PTR;check="$$(dig +tcp +noall +answer +search 177.60.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.60.177_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8959 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8959;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8959 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8959;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8959.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8959.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8959.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8959.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8959.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8959.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8959.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8959.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8959.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8959.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8959.svc;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8959.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 177.60.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.60.177_udp@PTR;check="$$(dig +tcp +noall +answer +search 177.60.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.60.177_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 17 00:10:40.559: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.565: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.571: INFO: Unable to read wheezy_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.577: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.595: INFO: Unable to read wheezy_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods 
dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.614: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.619: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.626: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.709: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.715: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.740: INFO: Unable to read jessie_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.757: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.763: INFO: Unable to read jessie_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the 
requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.788: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.793: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.799: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:40.859: INFO: Lookups using dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8959 wheezy_tcp@dns-test-service.dns-8959 wheezy_udp@dns-test-service.dns-8959.svc wheezy_tcp@dns-test-service.dns-8959.svc wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8959 jessie_tcp@dns-test-service.dns-8959 jessie_udp@dns-test-service.dns-8959.svc jessie_tcp@dns-test-service.dns-8959.svc jessie_udp@_http._tcp.dns-test-service.dns-8959.svc jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc] Apr 17 00:10:45.864: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.868: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not 
find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.871: INFO: Unable to read wheezy_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.875: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.908: INFO: Unable to read wheezy_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.910: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.913: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.916: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.932: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.934: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: 
the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.936: INFO: Unable to read jessie_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.940: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.942: INFO: Unable to read jessie_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.944: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.948: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.950: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:45.966: INFO: Lookups using dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8959 wheezy_tcp@dns-test-service.dns-8959 wheezy_udp@dns-test-service.dns-8959.svc wheezy_tcp@dns-test-service.dns-8959.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8959 jessie_tcp@dns-test-service.dns-8959 jessie_udp@dns-test-service.dns-8959.svc jessie_tcp@dns-test-service.dns-8959.svc jessie_udp@_http._tcp.dns-test-service.dns-8959.svc jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc] Apr 17 00:10:50.864: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.867: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.875: INFO: Unable to read wheezy_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.878: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.881: INFO: Unable to read wheezy_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.883: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.886: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.888: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.907: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.910: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.913: INFO: Unable to read jessie_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.916: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.919: INFO: Unable to read jessie_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.922: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.925: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.928: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:50.948: INFO: Lookups using dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8959 wheezy_tcp@dns-test-service.dns-8959 wheezy_udp@dns-test-service.dns-8959.svc wheezy_tcp@dns-test-service.dns-8959.svc wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8959 jessie_tcp@dns-test-service.dns-8959 jessie_udp@dns-test-service.dns-8959.svc jessie_tcp@dns-test-service.dns-8959.svc jessie_udp@_http._tcp.dns-test-service.dns-8959.svc jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc] Apr 17 00:10:55.863: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.867: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.871: INFO: Unable to read wheezy_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 
00:10:55.874: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.878: INFO: Unable to read wheezy_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.881: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.890: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.892: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.915: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.918: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.920: INFO: Unable to read jessie_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods 
dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.923: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.926: INFO: Unable to read jessie_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.929: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.931: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.934: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:10:55.952: INFO: Lookups using dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8959 wheezy_tcp@dns-test-service.dns-8959 wheezy_udp@dns-test-service.dns-8959.svc wheezy_tcp@dns-test-service.dns-8959.svc wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8959 jessie_tcp@dns-test-service.dns-8959 jessie_udp@dns-test-service.dns-8959.svc jessie_tcp@dns-test-service.dns-8959.svc 
jessie_udp@_http._tcp.dns-test-service.dns-8959.svc jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc] Apr 17 00:11:00.864: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.868: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.872: INFO: Unable to read wheezy_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.875: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.878: INFO: Unable to read wheezy_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.882: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.885: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.889: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod 
dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.912: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.915: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.917: INFO: Unable to read jessie_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.919: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.922: INFO: Unable to read jessie_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.923: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.926: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.928: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:00.943: INFO: Lookups using dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8959 wheezy_tcp@dns-test-service.dns-8959 wheezy_udp@dns-test-service.dns-8959.svc wheezy_tcp@dns-test-service.dns-8959.svc wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8959 jessie_tcp@dns-test-service.dns-8959 jessie_udp@dns-test-service.dns-8959.svc jessie_tcp@dns-test-service.dns-8959.svc jessie_udp@_http._tcp.dns-test-service.dns-8959.svc jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc] Apr 17 00:11:05.870: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.873: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.876: INFO: Unable to read wheezy_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.879: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.882: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.884: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.889: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.892: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.910: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.913: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.915: INFO: Unable to read jessie_udp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.918: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959 from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.921: 
INFO: Unable to read jessie_udp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.924: INFO: Unable to read jessie_tcp@dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.928: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.931: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc from pod dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a: the server could not find the requested resource (get pods dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a) Apr 17 00:11:05.951: INFO: Lookups using dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8959 wheezy_tcp@dns-test-service.dns-8959 wheezy_udp@dns-test-service.dns-8959.svc wheezy_tcp@dns-test-service.dns-8959.svc wheezy_udp@_http._tcp.dns-test-service.dns-8959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8959 jessie_tcp@dns-test-service.dns-8959 jessie_udp@dns-test-service.dns-8959.svc jessie_tcp@dns-test-service.dns-8959.svc jessie_udp@_http._tcp.dns-test-service.dns-8959.svc jessie_tcp@_http._tcp.dns-test-service.dns-8959.svc] Apr 17 00:11:10.951: INFO: DNS probes using dns-8959/dns-test-6f61681a-09ce-4402-99ca-0f17171dcb7a succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:11:11.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8959" for this suite.
• [SLOW TEST:37.323 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":107,"skipped":1991,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:11:11.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 17 00:11:11.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e75e3860-51a7-4063-8669-26b3a7b93e9b" in namespace "projected-1454" to be "Succeeded or Failed"
Apr 17 00:11:11.770: INFO: Pod "downwardapi-volume-e75e3860-51a7-4063-8669-26b3a7b93e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.747361ms
Apr 17 00:11:13.822: INFO: Pod "downwardapi-volume-e75e3860-51a7-4063-8669-26b3a7b93e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086477996s
Apr 17 00:11:15.826: INFO: Pod "downwardapi-volume-e75e3860-51a7-4063-8669-26b3a7b93e9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090824874s
STEP: Saw pod success
Apr 17 00:11:15.826: INFO: Pod "downwardapi-volume-e75e3860-51a7-4063-8669-26b3a7b93e9b" satisfied condition "Succeeded or Failed"
Apr 17 00:11:15.829: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e75e3860-51a7-4063-8669-26b3a7b93e9b container client-container:
STEP: delete the pod
Apr 17 00:11:16.023: INFO: Waiting for pod downwardapi-volume-e75e3860-51a7-4063-8669-26b3a7b93e9b to disappear
Apr 17 00:11:16.046: INFO: Pod downwardapi-volume-e75e3860-51a7-4063-8669-26b3a7b93e9b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:11:16.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1454" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":2001,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:11:16.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-5134 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 17 00:11:16.119: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 17 00:11:16.205: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 00:11:18.208: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 00:11:20.208: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:11:22.209: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:11:24.209: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:11:26.208: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:11:28.208: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:11:30.208: INFO: The status of Pod netserver-0 is 
Running (Ready = false) Apr 17 00:11:32.219: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:11:34.223: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:11:36.209: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 17 00:11:36.215: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 17 00:11:40.265: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.199:8080/dial?request=hostname&protocol=http&host=10.244.2.230&port=8080&tries=1'] Namespace:pod-network-test-5134 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:11:40.265: INFO: >>> kubeConfig: /root/.kube/config I0417 00:11:40.300182 7 log.go:172] (0xc002c6c370) (0xc001a8efa0) Create stream I0417 00:11:40.300217 7 log.go:172] (0xc002c6c370) (0xc001a8efa0) Stream added, broadcasting: 1 I0417 00:11:40.302265 7 log.go:172] (0xc002c6c370) Reply frame received for 1 I0417 00:11:40.302306 7 log.go:172] (0xc002c6c370) (0xc001a8f180) Create stream I0417 00:11:40.302319 7 log.go:172] (0xc002c6c370) (0xc001a8f180) Stream added, broadcasting: 3 I0417 00:11:40.303442 7 log.go:172] (0xc002c6c370) Reply frame received for 3 I0417 00:11:40.303487 7 log.go:172] (0xc002c6c370) (0xc0019ec460) Create stream I0417 00:11:40.303503 7 log.go:172] (0xc002c6c370) (0xc0019ec460) Stream added, broadcasting: 5 I0417 00:11:40.304584 7 log.go:172] (0xc002c6c370) Reply frame received for 5 I0417 00:11:40.398857 7 log.go:172] (0xc002c6c370) Data frame received for 3 I0417 00:11:40.398904 7 log.go:172] (0xc001a8f180) (3) Data frame handling I0417 00:11:40.398932 7 log.go:172] (0xc001a8f180) (3) Data frame sent I0417 00:11:40.399713 7 log.go:172] (0xc002c6c370) Data frame received for 3 I0417 00:11:40.399733 7 log.go:172] (0xc001a8f180) (3) Data frame handling I0417 00:11:40.399762 7 log.go:172] (0xc002c6c370) Data frame received for 5 I0417 
00:11:40.399799 7 log.go:172] (0xc0019ec460) (5) Data frame handling I0417 00:11:40.402309 7 log.go:172] (0xc002c6c370) Data frame received for 1 I0417 00:11:40.402345 7 log.go:172] (0xc001a8efa0) (1) Data frame handling I0417 00:11:40.402358 7 log.go:172] (0xc001a8efa0) (1) Data frame sent I0417 00:11:40.402368 7 log.go:172] (0xc002c6c370) (0xc001a8efa0) Stream removed, broadcasting: 1 I0417 00:11:40.402428 7 log.go:172] (0xc002c6c370) (0xc001a8efa0) Stream removed, broadcasting: 1 I0417 00:11:40.402440 7 log.go:172] (0xc002c6c370) (0xc001a8f180) Stream removed, broadcasting: 3 I0417 00:11:40.402530 7 log.go:172] (0xc002c6c370) (0xc0019ec460) Stream removed, broadcasting: 5 I0417 00:11:40.402593 7 log.go:172] (0xc002c6c370) Go away received Apr 17 00:11:40.402: INFO: Waiting for responses: map[] Apr 17 00:11:40.411: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.199:8080/dial?request=hostname&protocol=http&host=10.244.1.198&port=8080&tries=1'] Namespace:pod-network-test-5134 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:11:40.411: INFO: >>> kubeConfig: /root/.kube/config I0417 00:11:40.441763 7 log.go:172] (0xc002d0a630) (0xc001a5ce60) Create stream I0417 00:11:40.441798 7 log.go:172] (0xc002d0a630) (0xc001a5ce60) Stream added, broadcasting: 1 I0417 00:11:40.443739 7 log.go:172] (0xc002d0a630) Reply frame received for 1 I0417 00:11:40.443790 7 log.go:172] (0xc002d0a630) (0xc001a8f220) Create stream I0417 00:11:40.443806 7 log.go:172] (0xc002d0a630) (0xc001a8f220) Stream added, broadcasting: 3 I0417 00:11:40.444876 7 log.go:172] (0xc002d0a630) Reply frame received for 3 I0417 00:11:40.444919 7 log.go:172] (0xc002d0a630) (0xc001a8f2c0) Create stream I0417 00:11:40.444935 7 log.go:172] (0xc002d0a630) (0xc001a8f2c0) Stream added, broadcasting: 5 I0417 00:11:40.446096 7 log.go:172] (0xc002d0a630) Reply frame received for 5 I0417 00:11:40.494773 7 log.go:172] 
(0xc002d0a630) Data frame received for 3 I0417 00:11:40.494812 7 log.go:172] (0xc001a8f220) (3) Data frame handling I0417 00:11:40.494844 7 log.go:172] (0xc001a8f220) (3) Data frame sent I0417 00:11:40.495651 7 log.go:172] (0xc002d0a630) Data frame received for 3 I0417 00:11:40.495695 7 log.go:172] (0xc001a8f220) (3) Data frame handling I0417 00:11:40.495793 7 log.go:172] (0xc002d0a630) Data frame received for 5 I0417 00:11:40.495826 7 log.go:172] (0xc001a8f2c0) (5) Data frame handling I0417 00:11:40.497736 7 log.go:172] (0xc002d0a630) Data frame received for 1 I0417 00:11:40.497764 7 log.go:172] (0xc001a5ce60) (1) Data frame handling I0417 00:11:40.497795 7 log.go:172] (0xc001a5ce60) (1) Data frame sent I0417 00:11:40.497834 7 log.go:172] (0xc002d0a630) (0xc001a5ce60) Stream removed, broadcasting: 1 I0417 00:11:40.497869 7 log.go:172] (0xc002d0a630) Go away received I0417 00:11:40.498033 7 log.go:172] (0xc002d0a630) (0xc001a5ce60) Stream removed, broadcasting: 1 I0417 00:11:40.498064 7 log.go:172] (0xc002d0a630) (0xc001a8f220) Stream removed, broadcasting: 3 I0417 00:11:40.498084 7 log.go:172] (0xc002d0a630) (0xc001a8f2c0) Stream removed, broadcasting: 5 Apr 17 00:11:40.498: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:11:40.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5134" for this suite. 
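The intra-pod check above execs `curl` against the test pod's `/dial` endpoint, which in turn dials each netserver pod. A small sketch of how that probe URL is assembled from the pieces seen in the log (the helper name is mine, not from the e2e framework):

```python
from urllib.parse import urlencode

def dial_url(proxy_pod_ip, target_ip, port=8080, protocol="http", tries=1):
    # The test container exposes /dial on :8080; it forwards a "hostname"
    # request to the target pod and reports the responses it collected.
    query = urlencode({"request": "hostname", "protocol": protocol,
                       "host": target_ip, "port": port, "tries": tries})
    return f"http://{proxy_pod_ip}:8080/dial?{query}"

# Reproduces the first curl target from the log above.
assert dial_url("10.244.1.199", "10.244.2.230") == (
    "http://10.244.1.199:8080/dial?request=hostname&protocol=http"
    "&host=10.244.2.230&port=8080&tries=1")
```

The empty `Waiting for responses: map[]` lines indicate every expected hostname was received, so no responses remained outstanding.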
• [SLOW TEST:24.446 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":2002,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:11:40.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 17 00:11:45.090: INFO: Successfully updated pod "labelsupdate75dc010a-8428-4957-a8de-27a8009d0f2d"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:11:47.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3648" for this suite.
• [SLOW TEST:6.619 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":2008,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:11:47.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Apr 17 00:11:47.378: INFO: Waiting up to 5m0s for pod "var-expansion-b501ae1a-6061-41de-8eb1-bd8a9dcb769a" in namespace "var-expansion-9509" to be "Succeeded or Failed"
Apr 17 00:11:47.393: INFO: Pod "var-expansion-b501ae1a-6061-41de-8eb1-bd8a9dcb769a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.917602ms
Apr 17 00:11:49.397: INFO: Pod "var-expansion-b501ae1a-6061-41de-8eb1-bd8a9dcb769a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019250909s
Apr 17 00:11:51.401: INFO: Pod "var-expansion-b501ae1a-6061-41de-8eb1-bd8a9dcb769a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023289746s
STEP: Saw pod success
Apr 17 00:11:51.401: INFO: Pod "var-expansion-b501ae1a-6061-41de-8eb1-bd8a9dcb769a" satisfied condition "Succeeded or Failed"
Apr 17 00:11:51.404: INFO: Trying to get logs from node latest-worker pod var-expansion-b501ae1a-6061-41de-8eb1-bd8a9dcb769a container dapi-container:
STEP: delete the pod
Apr 17 00:11:51.424: INFO: Waiting for pod var-expansion-b501ae1a-6061-41de-8eb1-bd8a9dcb769a to disappear
Apr 17 00:11:51.429: INFO: Pod var-expansion-b501ae1a-6061-41de-8eb1-bd8a9dcb769a no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:11:51.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9509" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":2019,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:11:51.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Apr 17 00:11:51.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2540'
Apr 17 00:11:51.784: INFO: stderr: ""
Apr 17 00:11:51.784: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 17 00:11:52.788: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 00:11:52.788: INFO: Found 0 / 1
Apr 17 00:11:53.788: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 00:11:53.788: INFO: Found 0 / 1
Apr 17 00:11:54.788: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 00:11:54.788: INFO: Found 1 / 1
Apr 17 00:11:54.788: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 17 00:11:54.790: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 00:11:54.790: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 17 00:11:54.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-8j8rr --namespace=kubectl-2540 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 17 00:11:54.886: INFO: stderr: ""
Apr 17 00:11:54.886: INFO: stdout: "pod/agnhost-master-8j8rr patched\n"
STEP: checking annotations
Apr 17 00:11:54.890: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 00:11:54.890: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:11:54.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2540" for this suite.
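The `kubectl patch ... -p {"metadata":{"annotations":{"x":"y"}}}` call above applies a merge-style patch: nested objects are merged rather than replaced, so the annotation is added without disturbing the rest of the pod. A minimal sketch of that merge behaviour (in the style of RFC 7386 JSON merge patch; the pod dict is a toy stand-in, not the full API object):

```python
import copy

def merge_patch(obj, patch):
    """Apply a JSON-merge-patch-style dict: nested dicts merge recursively,
    None deletes a key, anything else overwrites."""
    out = copy.deepcopy(obj)
    for key, value in patch.items():
        if value is None:
            out.pop(key, None)
        elif isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_patch(out[key], value)
        else:
            out[key] = value
    return out

pod = {"metadata": {"name": "agnhost-master-8j8rr", "annotations": {}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
assert patched["metadata"]["annotations"] == {"x": "y"}
assert patched["metadata"]["name"] == "agnhost-master-8j8rr"  # untouched
```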
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":112,"skipped":2046,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:11:54.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 17 00:11:54.963: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 17 00:11:54.974: INFO: Waiting for terminating namespaces to be deleted... 
Apr 17 00:11:54.976: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 17 00:11:54.982: INFO: agnhost-master-8j8rr from kubectl-2540 started at 2020-04-17 00:11:51 +0000 UTC (1 container statuses recorded) Apr 17 00:11:54.982: INFO: Container agnhost-master ready: true, restart count 0 Apr 17 00:11:54.982: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:11:54.982: INFO: Container kindnet-cni ready: true, restart count 0 Apr 17 00:11:54.982: INFO: labelsupdate75dc010a-8428-4957-a8de-27a8009d0f2d from projected-3648 started at 2020-04-17 00:11:40 +0000 UTC (1 container statuses recorded) Apr 17 00:11:54.982: INFO: Container client-container ready: false, restart count 0 Apr 17 00:11:54.982: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:11:54.982: INFO: Container kube-proxy ready: true, restart count 0 Apr 17 00:11:54.982: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 17 00:11:55.008: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:11:55.008: INFO: Container kindnet-cni ready: true, restart count 0 Apr 17 00:11:55.008: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:11:55.008: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 17 00:11:55.067: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 17 00:11:55.067: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 17 
00:11:55.067: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 17 00:11:55.067: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker Apr 17 00:11:55.067: INFO: Pod agnhost-master-8j8rr requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Apr 17 00:11:55.067: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 Apr 17 00:11:55.073: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-2c966399-41bd-48c9-b24d-f734e50a10d2.160672f4f682503f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7278/filler-pod-2c966399-41bd-48c9-b24d-f734e50a10d2 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-2c966399-41bd-48c9-b24d-f734e50a10d2.160672f540e3668c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-2c966399-41bd-48c9-b24d-f734e50a10d2.160672f5812f0436], Reason = [Created], Message = [Created container filler-pod-2c966399-41bd-48c9-b24d-f734e50a10d2] STEP: Considering event: Type = [Normal], Name = [filler-pod-2c966399-41bd-48c9-b24d-f734e50a10d2.160672f59c118bc6], Reason = [Started], Message = [Started container filler-pod-2c966399-41bd-48c9-b24d-f734e50a10d2] STEP: Considering event: Type = [Normal], Name = [filler-pod-40ce2ca4-b1b1-4af7-9c97-44fd8a9df0d0.160672f4f6824e86], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7278/filler-pod-40ce2ca4-b1b1-4af7-9c97-44fd8a9df0d0 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-40ce2ca4-b1b1-4af7-9c97-44fd8a9df0d0.160672f585493238], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = 
[Normal], Name = [filler-pod-40ce2ca4-b1b1-4af7-9c97-44fd8a9df0d0.160672f5b9961546], Reason = [Created], Message = [Created container filler-pod-40ce2ca4-b1b1-4af7-9c97-44fd8a9df0d0] STEP: Considering event: Type = [Normal], Name = [filler-pod-40ce2ca4-b1b1-4af7-9c97-44fd8a9df0d0.160672f5c72e9886], Reason = [Started], Message = [Started container filler-pod-40ce2ca4-b1b1-4af7-9c97-44fd8a9df0d0] STEP: Considering event: Type = [Warning], Name = [additional-pod.160672f5e5d3202c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160672f5e8868cc1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:12:00.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7278" for this suite. 
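The `FailedScheduling` tally above ("1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu") comes from checking the additional pod against each node. A sketch of that per-node fit check; the allocatable figures below are hypothetical (the e2e test sizes its 11130m filler pods from the nodes' real allocatable CPU):

```python
def scheduling_failures(nodes, pod_request_m):
    """Return one failure reason per node that cannot accept the pod.
    nodes maps name -> (allocatable_millicpu, already_requested_millicpu, tainted)."""
    reasons = []
    for name, (allocatable_m, requested_m, tainted) in nodes.items():
        if tainted:
            reasons.append("taint not tolerated")
        elif allocatable_m - requested_m < pod_request_m:
            reasons.append("Insufficient cpu")
    return reasons

# Hypothetical cluster mirroring the event: master tainted, both workers
# filled by an 11130m filler pod on top of kindnet's 100m request.
nodes = {
    "latest-control-plane": (16000, 950, True),
    "latest-worker":        (16000, 100 + 11130, False),
    "latest-worker2":       (16000, 100 + 11130, False),
}
reasons = scheduling_failures(nodes, pod_request_m=11130)
assert reasons.count("taint not tolerated") == 1
assert reasons.count("Insufficient cpu") == 2  # 0/3 nodes are available
```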
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:5.536 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":113,"skipped":2051,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:12:00.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Apr 17 00:12:04.665: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2754 PodName:pod-sharedvolume-576dd0bf-5b8f-4292-9400-200d4e77f1fe ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 17 00:12:04.665: INFO: >>> kubeConfig: /root/.kube/config
I0417 00:12:04.697963 7 log.go:172] (0xc003080210) (0xc0014fda40) Create stream
I0417 00:12:04.698017 7 log.go:172] (0xc003080210) (0xc0014fda40) Stream added, broadcasting: 1
I0417 00:12:04.700481 7 log.go:172] (0xc003080210) Reply frame received for 1
I0417 00:12:04.700535 7 log.go:172] (0xc003080210) (0xc0011a32c0) Create stream
I0417 00:12:04.700552 7 log.go:172] (0xc003080210) (0xc0011a32c0) Stream added, broadcasting: 3
I0417 00:12:04.701829 7 log.go:172] (0xc003080210) Reply frame received for 3
I0417 00:12:04.701887 7 log.go:172] (0xc003080210) (0xc0011a3540) Create stream
I0417 00:12:04.701906 7 log.go:172] (0xc003080210) (0xc0011a3540) Stream added, broadcasting: 5
I0417 00:12:04.702995 7 log.go:172] (0xc003080210) Reply frame received for 5
I0417 00:12:04.765645 7 log.go:172] (0xc003080210) Data frame received for 3
I0417 00:12:04.765761 7 log.go:172] (0xc0011a32c0) (3) Data frame handling
I0417 00:12:04.765792 7 log.go:172] (0xc0011a32c0) (3) Data frame sent
I0417 00:12:04.765810 7 log.go:172] (0xc003080210) Data frame received for 3
I0417 00:12:04.765868 7 log.go:172] (0xc0011a32c0) (3) Data frame handling
I0417 00:12:04.765915 7 log.go:172] (0xc003080210) Data frame received for 5
I0417 00:12:04.765937 7 log.go:172] (0xc0011a3540) (5) Data frame handling
I0417 00:12:04.767551 7 log.go:172] (0xc003080210) Data frame received for 1
I0417 00:12:04.767571 7 log.go:172] (0xc0014fda40) (1) Data frame handling
I0417 00:12:04.767585 7 log.go:172] (0xc0014fda40) (1) Data frame sent
I0417 00:12:04.767610 7 log.go:172] (0xc003080210) (0xc0014fda40) Stream removed, broadcasting: 1
I0417 00:12:04.767634 7 log.go:172] (0xc003080210) Go away received
I0417 00:12:04.767774 7 log.go:172] (0xc003080210) (0xc0014fda40) Stream removed, broadcasting: 1
I0417 00:12:04.767801 7 log.go:172] (0xc003080210) (0xc0011a32c0) Stream removed, broadcasting: 3
I0417 00:12:04.767820 7 log.go:172] (0xc003080210) (0xc0011a3540) Stream removed, broadcasting: 5
Apr 17 00:12:04.767: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:12:04.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2754" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":114,"skipped":2052,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:12:04.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 00:12:05.541: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 17 00:12:07.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679125, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679125, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679125, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679125, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 00:12:10.631: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 17 00:12:10.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9781-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:12:11.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9395" for this suite.
STEP: Destroying namespace "webhook-9395-markers" for this suite.
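The mutating webhook registered above rewrites the incoming custom resource by returning an AdmissionReview response whose `patch` field is a base64-encoded JSONPatch. A sketch of that response shape (the uid and patch operation here are illustrative, not taken from this run's webhook):

```python
import base64
import json

def admission_response(uid, patch_ops):
    """Build a mutating-webhook AdmissionReview response: allowed, with a
    base64-encoded JSONPatch and patchType "JSONPatch"."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,  # must echo the uid from the request
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch_ops).encode()).decode(),
        },
    }

# Illustrative mutation: add a field to the custom resource's data.
ops = [{"op": "add", "path": "/data/mutated-field", "value": "yes"}]
resp = admission_response("11111111-2222-3333-4444-555555555555", ops)
assert json.loads(base64.b64decode(resp["response"]["patch"])) == ops
```

The API server decodes the patch, applies it, and (for CRDs with structural schemas, as in this test) prunes any fields the schema does not declare.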
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.043 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":115,"skipped":2057,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:12:11.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-7765d869-7483-4ef5-9472-1e6e8da07d4f STEP: Creating a pod to test consume configMaps Apr 17 00:12:11.995: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29540359-3764-4c16-b538-019211571c91" in namespace "projected-9757" to be "Succeeded or Failed" Apr 17 00:12:12.278: INFO: Pod 
"pod-projected-configmaps-29540359-3764-4c16-b538-019211571c91": Phase="Pending", Reason="", readiness=false. Elapsed: 283.484052ms Apr 17 00:12:14.285: INFO: Pod "pod-projected-configmaps-29540359-3764-4c16-b538-019211571c91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290462569s Apr 17 00:12:16.288: INFO: Pod "pod-projected-configmaps-29540359-3764-4c16-b538-019211571c91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293386318s STEP: Saw pod success Apr 17 00:12:16.288: INFO: Pod "pod-projected-configmaps-29540359-3764-4c16-b538-019211571c91" satisfied condition "Succeeded or Failed" Apr 17 00:12:16.291: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-29540359-3764-4c16-b538-019211571c91 container projected-configmap-volume-test: STEP: delete the pod Apr 17 00:12:16.311: INFO: Waiting for pod pod-projected-configmaps-29540359-3764-4c16-b538-019211571c91 to disappear Apr 17 00:12:16.316: INFO: Pod pod-projected-configmaps-29540359-3764-4c16-b538-019211571c91 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:12:16.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9757" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":2071,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:12:16.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Apr 17 00:12:16.396: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-136" to be "Succeeded or Failed" Apr 17 00:12:16.400: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.893462ms Apr 17 00:12:18.405: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00911863s Apr 17 00:12:20.410: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013713988s STEP: Saw pod success Apr 17 00:12:20.410: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Apr 17 00:12:20.414: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 17 00:12:20.443: INFO: Waiting for pod pod-host-path-test to disappear Apr 17 00:12:20.475: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:12:20.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-136" for this suite. •{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":2081,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:12:20.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-0003498f-ffce-4d42-83e5-2056b93d3abb STEP: Creating a pod to test consume secrets Apr 17 00:12:20.564: INFO: Waiting up to 5m0s for pod "pod-secrets-279c1fac-3577-4405-b2d6-8701a3e161f5" in namespace "secrets-7856" to be "Succeeded or 
Failed" Apr 17 00:12:20.631: INFO: Pod "pod-secrets-279c1fac-3577-4405-b2d6-8701a3e161f5": Phase="Pending", Reason="", readiness=false. Elapsed: 67.397465ms Apr 17 00:12:22.635: INFO: Pod "pod-secrets-279c1fac-3577-4405-b2d6-8701a3e161f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071484938s Apr 17 00:12:24.640: INFO: Pod "pod-secrets-279c1fac-3577-4405-b2d6-8701a3e161f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075826774s STEP: Saw pod success Apr 17 00:12:24.640: INFO: Pod "pod-secrets-279c1fac-3577-4405-b2d6-8701a3e161f5" satisfied condition "Succeeded or Failed" Apr 17 00:12:24.642: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-279c1fac-3577-4405-b2d6-8701a3e161f5 container secret-volume-test: STEP: delete the pod Apr 17 00:12:24.665: INFO: Waiting for pod pod-secrets-279c1fac-3577-4405-b2d6-8701a3e161f5 to disappear Apr 17 00:12:24.727: INFO: Pod pod-secrets-279c1fac-3577-4405-b2d6-8701a3e161f5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:12:24.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7856" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":2135,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:12:24.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-aeaf2dd8-12f9-4f21-a078-e0e6b64f76a9 in namespace container-probe-9976 Apr 17 00:12:28.793: INFO: Started pod liveness-aeaf2dd8-12f9-4f21-a078-e0e6b64f76a9 in namespace container-probe-9976 STEP: checking the pod's current state and verifying that restartCount is present Apr 17 00:12:28.796: INFO: Initial restart count of pod liveness-aeaf2dd8-12f9-4f21-a078-e0e6b64f76a9 is 0 Apr 17 00:12:46.836: INFO: Restart count of pod container-probe-9976/liveness-aeaf2dd8-12f9-4f21-a078-e0e6b64f76a9 is now 1 (18.039454974s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:12:46.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9976" for this 
suite. • [SLOW TEST:22.157 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":2147,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:12:46.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-e84b066c-8e3d-46c8-9872-6a2a954956a4 Apr 17 00:12:47.017: INFO: Pod name my-hostname-basic-e84b066c-8e3d-46c8-9872-6a2a954956a4: Found 0 pods out of 1 Apr 17 00:12:52.021: INFO: Pod name my-hostname-basic-e84b066c-8e3d-46c8-9872-6a2a954956a4: Found 1 pods out of 1 Apr 17 00:12:52.021: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e84b066c-8e3d-46c8-9872-6a2a954956a4" are running Apr 17 00:12:52.024: INFO: Pod "my-hostname-basic-e84b066c-8e3d-46c8-9872-6a2a954956a4-8dspb" is running (conditions: [{Type:Initialized Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-17 00:12:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-17 00:12:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-17 00:12:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-17 00:12:47 +0000 UTC Reason: Message:}]) Apr 17 00:12:52.025: INFO: Trying to dial the pod Apr 17 00:12:57.036: INFO: Controller my-hostname-basic-e84b066c-8e3d-46c8-9872-6a2a954956a4: Got expected result from replica 1 [my-hostname-basic-e84b066c-8e3d-46c8-9872-6a2a954956a4-8dspb]: "my-hostname-basic-e84b066c-8e3d-46c8-9872-6a2a954956a4-8dspb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:12:57.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2887" for this suite. 
• [SLOW TEST:10.151 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":120,"skipped":2163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:12:57.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:12:57.124: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Pending, waiting for it to be Running (with Ready = true) Apr 17 00:12:59.128: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Pending, waiting for it to be Running (with Ready = true) Apr 17 00:13:01.128: INFO: The status of Pod 
test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = false) Apr 17 00:13:03.128: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = false) Apr 17 00:13:05.128: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = false) Apr 17 00:13:07.129: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = false) Apr 17 00:13:09.129: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = false) Apr 17 00:13:11.128: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = false) Apr 17 00:13:13.129: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = false) Apr 17 00:13:15.129: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = false) Apr 17 00:13:17.129: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = false) Apr 17 00:13:19.129: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = false) Apr 17 00:13:21.128: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = false) Apr 17 00:13:23.128: INFO: The status of Pod test-webserver-7b3c396f-2bc0-48e4-bc14-3748ad04ac07 is Running (Ready = true) Apr 17 00:13:23.130: INFO: Container started at 2020-04-17 00:12:59 +0000 UTC, pod became ready at 2020-04-17 00:13:22 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:13:23.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2818" for this suite. 
• [SLOW TEST:26.091 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2218,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:13:23.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-bb5009c5-f244-4816-8563-b2db1282aa6d STEP: Creating a pod to test consume configMaps Apr 17 00:13:23.209: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-db852001-f603-406c-a0f9-c992f033825e" in namespace "projected-1429" to be "Succeeded or Failed" Apr 17 00:13:23.228: INFO: Pod "pod-projected-configmaps-db852001-f603-406c-a0f9-c992f033825e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.449594ms Apr 17 00:13:25.231: INFO: Pod "pod-projected-configmaps-db852001-f603-406c-a0f9-c992f033825e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022332004s Apr 17 00:13:27.235: INFO: Pod "pod-projected-configmaps-db852001-f603-406c-a0f9-c992f033825e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026273357s STEP: Saw pod success Apr 17 00:13:27.235: INFO: Pod "pod-projected-configmaps-db852001-f603-406c-a0f9-c992f033825e" satisfied condition "Succeeded or Failed" Apr 17 00:13:27.238: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-db852001-f603-406c-a0f9-c992f033825e container projected-configmap-volume-test: STEP: delete the pod Apr 17 00:13:27.297: INFO: Waiting for pod pod-projected-configmaps-db852001-f603-406c-a0f9-c992f033825e to disappear Apr 17 00:13:27.305: INFO: Pod pod-projected-configmaps-db852001-f603-406c-a0f9-c992f033825e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:13:27.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1429" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":122,"skipped":2239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:13:27.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 17 00:13:27.370: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-50 /api/v1/namespaces/watch-50/configmaps/e2e-watch-test-watch-closed 63af5291-c184-4a57-9521-2c4e40301020 8667278 0 2020-04-17 00:13:27 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 00:13:27.371: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-50 /api/v1/namespaces/watch-50/configmaps/e2e-watch-test-watch-closed 63af5291-c184-4a57-9521-2c4e40301020 8667279 0 2020-04-17 00:13:27 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 17 00:13:27.380: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-50 /api/v1/namespaces/watch-50/configmaps/e2e-watch-test-watch-closed 63af5291-c184-4a57-9521-2c4e40301020 8667280 0 2020-04-17 00:13:27 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 00:13:27.381: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-50 /api/v1/namespaces/watch-50/configmaps/e2e-watch-test-watch-closed 63af5291-c184-4a57-9521-2c4e40301020 8667281 0 2020-04-17 00:13:27 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:13:27.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-50" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":123,"skipped":2270,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:13:27.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-54a97622-c0ef-47c6-afeb-fa74d80492ca STEP: Creating a pod to test consume configMaps Apr 17 00:13:27.487: INFO: Waiting up to 5m0s for pod "pod-configmaps-167787ae-edc7-46fc-8234-d8f67eae4e43" in namespace "configmap-2768" to be "Succeeded or Failed" Apr 17 00:13:27.491: INFO: Pod "pod-configmaps-167787ae-edc7-46fc-8234-d8f67eae4e43": Phase="Pending", Reason="", readiness=false. Elapsed: 3.369599ms Apr 17 00:13:29.495: INFO: Pod "pod-configmaps-167787ae-edc7-46fc-8234-d8f67eae4e43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00733597s Apr 17 00:13:31.499: INFO: Pod "pod-configmaps-167787ae-edc7-46fc-8234-d8f67eae4e43": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011815362s STEP: Saw pod success Apr 17 00:13:31.499: INFO: Pod "pod-configmaps-167787ae-edc7-46fc-8234-d8f67eae4e43" satisfied condition "Succeeded or Failed" Apr 17 00:13:31.502: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-167787ae-edc7-46fc-8234-d8f67eae4e43 container configmap-volume-test: STEP: delete the pod Apr 17 00:13:31.537: INFO: Waiting for pod pod-configmaps-167787ae-edc7-46fc-8234-d8f67eae4e43 to disappear Apr 17 00:13:31.551: INFO: Pod pod-configmaps-167787ae-edc7-46fc-8234-d8f67eae4e43 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:13:31.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2768" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2274,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:13:31.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 17 00:13:31.659: INFO: Pod name pod-release: Found 0 pods out of 1 
Apr 17 00:13:36.666: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:13:36.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-880" for this suite. • [SLOW TEST:5.180 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":125,"skipped":2294,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:13:36.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 00:13:37.528: INFO: deployment "sample-webhook-deployment" doesn't have 
the required revision set Apr 17 00:13:39.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679217, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679217, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679217, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679217, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 00:13:42.599: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:13:42.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3959-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:13:43.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "webhook-3328" for this suite. STEP: Destroying namespace "webhook-3328-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.213 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":126,"skipped":2297,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:13:43.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:13:44.019: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 17 00:13:44.053: INFO: Pod name sample-pod: Found 0 
pods out of 1 Apr 17 00:13:49.075: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 17 00:13:49.076: INFO: Creating deployment "test-rolling-update-deployment" Apr 17 00:13:49.079: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 17 00:13:49.088: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 17 00:13:51.094: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 17 00:13:51.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679229, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679229, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679229, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679229, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 00:13:53.147: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 17 00:13:53.167: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3865 
/apis/apps/v1/namespaces/deployment-3865/deployments/test-rolling-update-deployment 1f1e137e-c2cc-427c-97ee-0f301742ab47 8667574 1 2020-04-17 00:13:49 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ba90e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-17 00:13:49 +0000 UTC,LastTransitionTime:2020-04-17 00:13:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-17 00:13:52 +0000 
UTC,LastTransitionTime:2020-04-17 00:13:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 17 00:13:53.169: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-3865 /apis/apps/v1/namespaces/deployment-3865/replicasets/test-rolling-update-deployment-664dd8fc7f 4164e289-3ae6-4b5a-9957-dd0c862a23ba 8667563 1 2020-04-17 00:13:49 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 1f1e137e-c2cc-427c-97ee-0f301742ab47 0xc002ba95f7 0xc002ba95f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ba9688 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 17 00:13:53.169: INFO: All old ReplicaSets 
of Deployment "test-rolling-update-deployment": Apr 17 00:13:53.169: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3865 /apis/apps/v1/namespaces/deployment-3865/replicasets/test-rolling-update-controller e19e9cd5-f842-4296-a36c-8b55e1a1376a 8667573 2 2020-04-17 00:13:44 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 1f1e137e-c2cc-427c-97ee-0f301742ab47 0xc002ba950f 0xc002ba9520}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002ba9588 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 17 00:13:53.171: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-v56cc" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-v56cc test-rolling-update-deployment-664dd8fc7f- deployment-3865 /api/v1/namespaces/deployment-3865/pods/test-rolling-update-deployment-664dd8fc7f-v56cc b7ae325e-7ecb-4f17-bcbf-971dbc72d4ae 8667562 0 2020-04-17 00:13:49 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 
4164e289-3ae6-4b5a-9957-dd0c862a23ba 0xc002ba9b57 0xc002ba9b58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h82xd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h82xd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h82xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctl
s:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:13:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:13:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.242,StartTime:2020-04-17 00:13:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:13:51 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://d25bd1494ca9782447c2b59930976c86cd8ec560112ce42eb6d1d3136a9912ba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:13:53.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3865" for this suite. • [SLOW TEST:9.225 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":127,"skipped":2319,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:13:53.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 00:13:53.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33d0e528-600c-44a6-a6bd-eb46d973d5b2" in namespace "downward-api-937" to be "Succeeded or Failed" Apr 17 00:13:53.234: INFO: Pod "downwardapi-volume-33d0e528-600c-44a6-a6bd-eb46d973d5b2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.967332ms Apr 17 00:13:55.246: INFO: Pod "downwardapi-volume-33d0e528-600c-44a6-a6bd-eb46d973d5b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01602654s Apr 17 00:13:57.250: INFO: Pod "downwardapi-volume-33d0e528-600c-44a6-a6bd-eb46d973d5b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020019322s STEP: Saw pod success Apr 17 00:13:57.251: INFO: Pod "downwardapi-volume-33d0e528-600c-44a6-a6bd-eb46d973d5b2" satisfied condition "Succeeded or Failed" Apr 17 00:13:57.253: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-33d0e528-600c-44a6-a6bd-eb46d973d5b2 container client-container: STEP: delete the pod Apr 17 00:13:57.287: INFO: Waiting for pod downwardapi-volume-33d0e528-600c-44a6-a6bd-eb46d973d5b2 to disappear Apr 17 00:13:57.290: INFO: Pod downwardapi-volume-33d0e528-600c-44a6-a6bd-eb46d973d5b2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:13:57.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-937" for this suite. 
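The "set mode on item file" check above amounts to creating a pod with a downwardAPI volume whose item carries an explicit `mode`, then reading the file's permissions from the container. A minimal sketch of such a manifest follows; the pod and file names here are illustrative (the test generates UID-based names), and the `mounttest` arguments are an assumption about how the framework reads the mode back:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["mounttest", "--file_mode=/etc/podinfo/podname"]  # assumed invocation
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400        # per-item mode; the test asserts the file reports this
```

The per-item `mode` overrides the volume-wide `defaultMode`, which is why the test checks an individual item file rather than the mount point.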
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2326,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:13:57.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:14:04.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9866" for this suite. • [SLOW TEST:7.133 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":129,"skipped":2331,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:14:04.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:14:04.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3872" for this suite. 
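The Lease API exercised by this spec lives in the `coordination.k8s.io/v1` group. For reference, a minimal Lease object looks like the following; the object name and holder identity are illustrative, not values taken from the test run:

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease          # illustrative name
  namespace: lease-test-3872
spec:
  holderIdentity: example-holder   # illustrative holder
  leaseDurationSeconds: 30
  renewTime: "2020-04-17T00:14:04.000000Z"  # MicroTime format
```

The conformance test verifies the standard verbs (create, get, list, patch, update, delete) against objects of roughly this shape.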
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":130,"skipped":2340,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:14:04.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-3039131c-7ba1-47cb-a007-deb56c0ce8bf STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:14:08.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4620" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2348,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:14:08.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 17 00:14:08.863: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-492 /api/v1/namespaces/watch-492/configmaps/e2e-watch-test-resource-version f83a3b15-49ab-4b42-afa0-59b8ac40501e 8667711 0 2020-04-17 00:14:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 00:14:08.864: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-492 /api/v1/namespaces/watch-492/configmaps/e2e-watch-test-resource-version f83a3b15-49ab-4b42-afa0-59b8ac40501e 8667712 0 2020-04-17 00:14:08 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:14:08.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-492" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":132,"skipped":2362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:14:08.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-wtkj STEP: Creating a pod to test atomic-volume-subpath Apr 17 00:14:08.991: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wtkj" in namespace "subpath-8923" to be "Succeeded or Failed" Apr 17 00:14:09.019: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.85499ms Apr 17 00:14:11.023: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032264888s Apr 17 00:14:13.028: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Running", Reason="", readiness=true. Elapsed: 4.036393074s Apr 17 00:14:15.032: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Running", Reason="", readiness=true. Elapsed: 6.040463029s Apr 17 00:14:17.036: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Running", Reason="", readiness=true. Elapsed: 8.045082173s Apr 17 00:14:19.041: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Running", Reason="", readiness=true. Elapsed: 10.049297274s Apr 17 00:14:21.045: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Running", Reason="", readiness=true. Elapsed: 12.053335411s Apr 17 00:14:23.047: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Running", Reason="", readiness=true. Elapsed: 14.056279352s Apr 17 00:14:25.051: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Running", Reason="", readiness=true. Elapsed: 16.060243388s Apr 17 00:14:27.056: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Running", Reason="", readiness=true. Elapsed: 18.064719651s Apr 17 00:14:29.060: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Running", Reason="", readiness=true. Elapsed: 20.068919447s Apr 17 00:14:31.063: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Running", Reason="", readiness=true. Elapsed: 22.072245004s Apr 17 00:14:33.068: INFO: Pod "pod-subpath-test-projected-wtkj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.076558186s STEP: Saw pod success Apr 17 00:14:33.068: INFO: Pod "pod-subpath-test-projected-wtkj" satisfied condition "Succeeded or Failed" Apr 17 00:14:33.071: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-wtkj container test-container-subpath-projected-wtkj: STEP: delete the pod Apr 17 00:14:33.105: INFO: Waiting for pod pod-subpath-test-projected-wtkj to disappear Apr 17 00:14:33.147: INFO: Pod pod-subpath-test-projected-wtkj no longer exists STEP: Deleting pod pod-subpath-test-projected-wtkj Apr 17 00:14:33.147: INFO: Deleting pod "pod-subpath-test-projected-wtkj" in namespace "subpath-8923" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:14:33.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8923" for this suite. • [SLOW TEST:24.282 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":133,"skipped":2385,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client Apr 17 00:14:33.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:14:33.195: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 17 00:14:36.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9188 create -f -' Apr 17 00:14:38.884: INFO: stderr: "" Apr 17 00:14:38.885: INFO: stdout: "e2e-test-crd-publish-openapi-9378-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 17 00:14:38.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9188 delete e2e-test-crd-publish-openapi-9378-crds test-cr' Apr 17 00:14:38.995: INFO: stderr: "" Apr 17 00:14:38.995: INFO: stdout: "e2e-test-crd-publish-openapi-9378-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 17 00:14:38.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9188 apply -f -' Apr 17 00:14:39.259: INFO: stderr: "" Apr 17 00:14:39.259: INFO: stdout: "e2e-test-crd-publish-openapi-9378-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 17 00:14:39.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9188 delete e2e-test-crd-publish-openapi-9378-crds test-cr' Apr 17 00:14:39.366: INFO: stderr: "" Apr 17 00:14:39.366: INFO: stdout: 
"e2e-test-crd-publish-openapi-9378-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 17 00:14:39.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9378-crds' Apr 17 00:14:39.597: INFO: stderr: "" Apr 17 00:14:39.597: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9378-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:14:41.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9188" for this suite. • [SLOW TEST:8.344 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":134,"skipped":2385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:14:41.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a 
default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2917.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2917.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2917.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2917.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2917.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2917.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 17 00:14:47.627: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:47.631: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:47.635: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:47.637: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:47.667: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:47.670: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from 
pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:47.673: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:47.675: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:47.681: INFO: Lookups using dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local] Apr 17 00:14:52.686: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:52.696: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:52.703: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local from 
pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:52.706: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:52.715: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:52.718: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:52.721: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:52.725: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:52.739: INFO: Lookups using dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local] Apr 17 00:14:57.686: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:57.690: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:57.712: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:57.715: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:57.724: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:57.727: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:57.730: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local from pod 
dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:57.733: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:14:57.739: INFO: Lookups using dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local] Apr 17 00:15:02.686: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:02.690: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:02.694: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:02.697: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod 
dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:02.707: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:02.710: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:02.713: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:02.716: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:02.726: INFO: Lookups using dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local] Apr 17 00:15:07.685: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:07.688: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:07.690: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:07.694: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:07.703: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:07.707: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:07.710: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:07.712: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:07.719: INFO: Lookups using dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local] Apr 17 00:15:12.701: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:12.706: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:13.173: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:13.183: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:13.192: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:13.195: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:13.198: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:13.201: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local from pod dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f: the server could not find the requested resource (get pods dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f) Apr 17 00:15:13.207: INFO: Lookups using dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2917.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2917.svc.cluster.local jessie_udp@dns-test-service-2.dns-2917.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2917.svc.cluster.local] Apr 17 00:15:17.726: INFO: DNS probes using dns-2917/dns-test-7873fe3b-9d11-4084-b043-7537b8219c2f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 
00:15:17.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2917" for this suite. • [SLOW TEST:36.821 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":135,"skipped":2415,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:15:18.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:15:22.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-598" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2416,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:15:22.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 00:15:22.848: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 17 00:15:24.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679322, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679322, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679322, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679322, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 00:15:27.873: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 17 00:15:27.894: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:15:27.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1377" for this suite. STEP: Destroying namespace "webhook-1377-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.489 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":137,"skipped":2429,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:15:28.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 17 00:15:28.096: INFO: Waiting up to 5m0s for pod "pod-ddcf19e7-e7e6-4a8f-b900-3d350f9383b8" in namespace "emptydir-3474" to be "Succeeded or Failed" Apr 17 00:15:28.113: INFO: Pod "pod-ddcf19e7-e7e6-4a8f-b900-3d350f9383b8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.40492ms Apr 17 00:15:30.118: INFO: Pod "pod-ddcf19e7-e7e6-4a8f-b900-3d350f9383b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021730049s Apr 17 00:15:32.122: INFO: Pod "pod-ddcf19e7-e7e6-4a8f-b900-3d350f9383b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02596515s STEP: Saw pod success Apr 17 00:15:32.122: INFO: Pod "pod-ddcf19e7-e7e6-4a8f-b900-3d350f9383b8" satisfied condition "Succeeded or Failed" Apr 17 00:15:32.125: INFO: Trying to get logs from node latest-worker2 pod pod-ddcf19e7-e7e6-4a8f-b900-3d350f9383b8 container test-container: STEP: delete the pod Apr 17 00:15:32.150: INFO: Waiting for pod pod-ddcf19e7-e7e6-4a8f-b900-3d350f9383b8 to disappear Apr 17 00:15:32.153: INFO: Pod pod-ddcf19e7-e7e6-4a8f-b900-3d350f9383b8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:15:32.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3474" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2474,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:15:32.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-684d487f-2acf-4d66-84c9-396fadfc3c72 STEP: Creating a pod to test consume secrets Apr 17 00:15:32.287: INFO: Waiting up to 5m0s for pod "pod-secrets-177ae8a7-028b-4975-8a82-8a4b1507c553" in namespace "secrets-5737" to be "Succeeded or Failed" Apr 17 00:15:32.306: INFO: Pod "pod-secrets-177ae8a7-028b-4975-8a82-8a4b1507c553": Phase="Pending", Reason="", readiness=false. Elapsed: 18.778564ms Apr 17 00:15:34.310: INFO: Pod "pod-secrets-177ae8a7-028b-4975-8a82-8a4b1507c553": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022600786s Apr 17 00:15:36.314: INFO: Pod "pod-secrets-177ae8a7-028b-4975-8a82-8a4b1507c553": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026794416s STEP: Saw pod success Apr 17 00:15:36.314: INFO: Pod "pod-secrets-177ae8a7-028b-4975-8a82-8a4b1507c553" satisfied condition "Succeeded or Failed" Apr 17 00:15:36.317: INFO: Trying to get logs from node latest-worker pod pod-secrets-177ae8a7-028b-4975-8a82-8a4b1507c553 container secret-volume-test: STEP: delete the pod Apr 17 00:15:36.353: INFO: Waiting for pod pod-secrets-177ae8a7-028b-4975-8a82-8a4b1507c553 to disappear Apr 17 00:15:36.388: INFO: Pod pod-secrets-177ae8a7-028b-4975-8a82-8a4b1507c553 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:15:36.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5737" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:15:36.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to 
create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0417 00:15:37.539369 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 17 00:15:37.539: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:15:37.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8746" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":140,"skipped":2527,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:15:37.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:15:41.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2360" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2574,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:15:41.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 17 00:15:41.814: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6262637d-dbc4-48c7-a581-ac827a92011b" in namespace "downward-api-4697" to be "Succeeded or Failed"
Apr 17 00:15:41.867: INFO: Pod "downwardapi-volume-6262637d-dbc4-48c7-a581-ac827a92011b": Phase="Pending", Reason="", readiness=false. Elapsed: 52.958709ms
Apr 17 00:15:43.871: INFO: Pod "downwardapi-volume-6262637d-dbc4-48c7-a581-ac827a92011b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057265745s
Apr 17 00:15:45.876: INFO: Pod "downwardapi-volume-6262637d-dbc4-48c7-a581-ac827a92011b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06159382s
STEP: Saw pod success
Apr 17 00:15:45.876: INFO: Pod "downwardapi-volume-6262637d-dbc4-48c7-a581-ac827a92011b" satisfied condition "Succeeded or Failed"
Apr 17 00:15:45.879: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6262637d-dbc4-48c7-a581-ac827a92011b container client-container:
STEP: delete the pod
Apr 17 00:15:45.969: INFO: Waiting for pod downwardapi-volume-6262637d-dbc4-48c7-a581-ac827a92011b to disappear
Apr 17 00:15:45.974: INFO: Pod downwardapi-volume-6262637d-dbc4-48c7-a581-ac827a92011b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:15:45.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4697" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2584,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:15:45.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 17 00:15:46.050: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 17 00:15:46.107: INFO: Waiting for terminating namespaces to be deleted...
Apr 17 00:15:46.112: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 17 00:15:46.117: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 17 00:15:46.117: INFO: Container kindnet-cni ready: true, restart count 0
Apr 17 00:15:46.117: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 17 00:15:46.117: INFO: Container kube-proxy ready: true, restart count 0
Apr 17 00:15:46.117: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 17 00:15:46.121: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 17 00:15:46.121: INFO: Container kindnet-cni ready: true, restart count 0
Apr 17 00:15:46.121: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 17 00:15:46.121: INFO: Container kube-proxy ready: true, restart count 0
Apr 17 00:15:46.121: INFO: busybox-host-aliases281e669e-96c8-45bf-9479-208560fc5001 from kubelet-test-2360 started at 2020-04-17 00:15:37 +0000 UTC (1 container statuses recorded)
Apr 17 00:15:46.121: INFO: Container busybox-host-aliases281e669e-96c8-45bf-9479-208560fc5001 ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1606732abfb76676], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1606732ac15a8100], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:15:47.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3679" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":143,"skipped":2587,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:15:47.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 17 00:15:51.771: INFO: Successfully updated pod "labelsupdatec0c237da-63b0-44f6-985b-0983b25bc2b7"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:15:53.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9185" for this suite.
• [SLOW TEST:6.645 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2592,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:15:53.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 17 00:15:53.906: INFO: Waiting up to 5m0s for pod "downward-api-5e481b56-3d52-4780-91cd-6135051dbea5" in namespace "downward-api-671" to be "Succeeded or Failed"
Apr 17 00:15:53.926: INFO: Pod "downward-api-5e481b56-3d52-4780-91cd-6135051dbea5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.637859ms
Apr 17 00:15:55.930: INFO: Pod "downward-api-5e481b56-3d52-4780-91cd-6135051dbea5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024853166s
Apr 17 00:15:57.940: INFO: Pod "downward-api-5e481b56-3d52-4780-91cd-6135051dbea5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034147479s
STEP: Saw pod success
Apr 17 00:15:57.940: INFO: Pod "downward-api-5e481b56-3d52-4780-91cd-6135051dbea5" satisfied condition "Succeeded or Failed"
Apr 17 00:15:57.944: INFO: Trying to get logs from node latest-worker pod downward-api-5e481b56-3d52-4780-91cd-6135051dbea5 container dapi-container:
STEP: delete the pod
Apr 17 00:15:57.962: INFO: Waiting for pod downward-api-5e481b56-3d52-4780-91cd-6135051dbea5 to disappear
Apr 17 00:15:57.967: INFO: Pod downward-api-5e481b56-3d52-4780-91cd-6135051dbea5 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:15:57.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-671" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2620,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:15:57.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-5eb2e42b-da0a-4bcd-a64a-924eca1a403d in namespace container-probe-232
Apr 17 00:16:02.097: INFO: Started pod busybox-5eb2e42b-da0a-4bcd-a64a-924eca1a403d in namespace container-probe-232
STEP: checking the pod's current state and verifying that restartCount is present
Apr 17 00:16:02.100: INFO: Initial restart count of pod busybox-5eb2e42b-da0a-4bcd-a64a-924eca1a403d is 0
Apr 17 00:16:54.252: INFO: Restart count of pod container-probe-232/busybox-5eb2e42b-da0a-4bcd-a64a-924eca1a403d is now 1 (52.151617237s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:16:54.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-232" for this suite.
• [SLOW TEST:56.349 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2629,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:16:54.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-a554a71d-6bb3-4746-a759-4a0d9cc1d470
STEP: Creating a pod to test consume secrets
Apr 17 00:16:54.384: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ed5972b1-3041-4ba2-9510-5d5bd28d768a" in namespace "projected-7211" to be "Succeeded or Failed"
Apr 17 00:16:54.388: INFO: Pod "pod-projected-secrets-ed5972b1-3041-4ba2-9510-5d5bd28d768a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.718828ms
Apr 17 00:16:56.392: INFO: Pod "pod-projected-secrets-ed5972b1-3041-4ba2-9510-5d5bd28d768a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007845728s
Apr 17 00:16:58.396: INFO: Pod "pod-projected-secrets-ed5972b1-3041-4ba2-9510-5d5bd28d768a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012079071s
STEP: Saw pod success
Apr 17 00:16:58.396: INFO: Pod "pod-projected-secrets-ed5972b1-3041-4ba2-9510-5d5bd28d768a" satisfied condition "Succeeded or Failed"
Apr 17 00:16:58.399: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-ed5972b1-3041-4ba2-9510-5d5bd28d768a container projected-secret-volume-test:
STEP: delete the pod
Apr 17 00:16:58.420: INFO: Waiting for pod pod-projected-secrets-ed5972b1-3041-4ba2-9510-5d5bd28d768a to disappear
Apr 17 00:16:58.423: INFO: Pod pod-projected-secrets-ed5972b1-3041-4ba2-9510-5d5bd28d768a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:16:58.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7211" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2639,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:16:58.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-7740/configmap-test-d6717d54-e836-45d1-928c-9c6fb0a9d034
STEP: Creating a pod to test consume configMaps
Apr 17 00:16:58.586: INFO: Waiting up to 5m0s for pod "pod-configmaps-0db2b6c6-9ab3-4ae3-b605-07b77b0cfdfa" in namespace "configmap-7740" to be "Succeeded or Failed"
Apr 17 00:16:58.622: INFO: Pod "pod-configmaps-0db2b6c6-9ab3-4ae3-b605-07b77b0cfdfa": Phase="Pending", Reason="", readiness=false. Elapsed: 36.180605ms
Apr 17 00:17:00.627: INFO: Pod "pod-configmaps-0db2b6c6-9ab3-4ae3-b605-07b77b0cfdfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040226295s
Apr 17 00:17:02.631: INFO: Pod "pod-configmaps-0db2b6c6-9ab3-4ae3-b605-07b77b0cfdfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044574267s
STEP: Saw pod success
Apr 17 00:17:02.631: INFO: Pod "pod-configmaps-0db2b6c6-9ab3-4ae3-b605-07b77b0cfdfa" satisfied condition "Succeeded or Failed"
Apr 17 00:17:02.634: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-0db2b6c6-9ab3-4ae3-b605-07b77b0cfdfa container env-test:
STEP: delete the pod
Apr 17 00:17:02.659: INFO: Waiting for pod pod-configmaps-0db2b6c6-9ab3-4ae3-b605-07b77b0cfdfa to disappear
Apr 17 00:17:02.665: INFO: Pod pod-configmaps-0db2b6c6-9ab3-4ae3-b605-07b77b0cfdfa no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:17:02.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7740" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2686,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:17:02.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Apr 17 00:17:06.740: INFO: Pod pod-hostip-1f96554f-dfcf-49c2-a51d-fca45380b905 has hostIP: 172.17.0.13
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:17:06.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9297" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2703,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:17:06.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-c1e8bac3-5411-4194-b5fd-a7c7284d9d1d
STEP: Creating a pod to test consume secrets
Apr 17 00:17:06.840: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-283bb8a8-3691-44ac-8995-d38fdd0b0c7e" in namespace "projected-8021" to be "Succeeded or Failed"
Apr 17 00:17:06.845: INFO: Pod "pod-projected-secrets-283bb8a8-3691-44ac-8995-d38fdd0b0c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.412426ms
Apr 17 00:17:08.910: INFO: Pod "pod-projected-secrets-283bb8a8-3691-44ac-8995-d38fdd0b0c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06986995s
Apr 17 00:17:10.914: INFO: Pod "pod-projected-secrets-283bb8a8-3691-44ac-8995-d38fdd0b0c7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073920789s
STEP: Saw pod success
Apr 17 00:17:10.914: INFO: Pod "pod-projected-secrets-283bb8a8-3691-44ac-8995-d38fdd0b0c7e" satisfied condition "Succeeded or Failed"
Apr 17 00:17:10.917: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-283bb8a8-3691-44ac-8995-d38fdd0b0c7e container projected-secret-volume-test:
STEP: delete the pod
Apr 17 00:17:10.955: INFO: Waiting for pod pod-projected-secrets-283bb8a8-3691-44ac-8995-d38fdd0b0c7e to disappear
Apr 17 00:17:10.969: INFO: Pod pod-projected-secrets-283bb8a8-3691-44ac-8995-d38fdd0b0c7e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:17:10.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8021" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2724,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:17:10.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:17:22.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-786" for this suite.
• [SLOW TEST:11.149 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":151,"skipped":2753,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:17:22.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 00:17:22.875: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 17 00:17:26.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679442, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679442, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679442, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679442, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 00:17:29.109: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:17:39.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1949" for this suite.
STEP: Destroying namespace "webhook-1949-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.199 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":152,"skipped":2805,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:17:39.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 17 00:17:39.422: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3337 /api/v1/namespaces/watch-3337/configmaps/e2e-watch-test-label-changed 614ed327-4971-4d7a-b985-ebf2f33c21ac 8668962 0 2020-04-17 00:17:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 17 00:17:39.422: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3337 /api/v1/namespaces/watch-3337/configmaps/e2e-watch-test-label-changed 614ed327-4971-4d7a-b985-ebf2f33c21ac 8668963 0 2020-04-17 00:17:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 17 00:17:39.422: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3337 /api/v1/namespaces/watch-3337/configmaps/e2e-watch-test-label-changed 614ed327-4971-4d7a-b985-ebf2f33c21ac 8668964 0 2020-04-17 00:17:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 17 00:17:49.463: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3337 /api/v1/namespaces/watch-3337/configmaps/e2e-watch-test-label-changed 614ed327-4971-4d7a-b985-ebf2f33c21ac 8669017 0 2020-04-17 00:17:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 17 00:17:49.463: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3337 /api/v1/namespaces/watch-3337/configmaps/e2e-watch-test-label-changed 614ed327-4971-4d7a-b985-ebf2f33c21ac 8669018 0 2020-04-17 00:17:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 17 00:17:49.463: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3337 /api/v1/namespaces/watch-3337/configmaps/e2e-watch-test-label-changed 614ed327-4971-4d7a-b985-ebf2f33c21ac 8669019 0 2020-04-17 00:17:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:17:49.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3337" for this suite.
• [SLOW TEST:10.148 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":153,"skipped":2846,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:17:49.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 17 00:17:49.592: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 17 00:17:49.666: INFO: Waiting for terminating namespaces to be deleted...
Apr 17 00:17:49.669: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 17 00:17:49.685: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 17 00:17:49.685: INFO: Container kindnet-cni ready: true, restart count 0
Apr 17 00:17:49.685: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 17 00:17:49.685: INFO: Container kube-proxy ready: true, restart count 0
Apr 17 00:17:49.685: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 17 00:17:49.692: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 17 00:17:49.692: INFO: Container kube-proxy ready: true, restart count 0
Apr 17 00:17:49.692: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 17 00:17:49.692: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-728b90f9-fe94-4a5b-8133-75ed29443e56 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-728b90f9-fe94-4a5b-8133-75ed29443e56 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-728b90f9-fe94-4a5b-8133-75ed29443e56 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:17:57.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8500" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.385 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":154,"skipped":2867,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:17:57.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] 
should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-33e74c55-ec7a-48aa-b1ab-842a47ccda9e STEP: Creating a pod to test consume secrets Apr 17 00:17:57.956: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ff34ce59-81e0-4391-9611-d578b96aefac" in namespace "projected-7529" to be "Succeeded or Failed" Apr 17 00:17:57.972: INFO: Pod "pod-projected-secrets-ff34ce59-81e0-4391-9611-d578b96aefac": Phase="Pending", Reason="", readiness=false. Elapsed: 16.316313ms Apr 17 00:17:59.977: INFO: Pod "pod-projected-secrets-ff34ce59-81e0-4391-9611-d578b96aefac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02096821s Apr 17 00:18:01.981: INFO: Pod "pod-projected-secrets-ff34ce59-81e0-4391-9611-d578b96aefac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025141223s STEP: Saw pod success Apr 17 00:18:01.981: INFO: Pod "pod-projected-secrets-ff34ce59-81e0-4391-9611-d578b96aefac" satisfied condition "Succeeded or Failed" Apr 17 00:18:01.984: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-ff34ce59-81e0-4391-9611-d578b96aefac container projected-secret-volume-test: STEP: delete the pod Apr 17 00:18:02.002: INFO: Waiting for pod pod-projected-secrets-ff34ce59-81e0-4391-9611-d578b96aefac to disappear Apr 17 00:18:02.006: INFO: Pod pod-projected-secrets-ff34ce59-81e0-4391-9611-d578b96aefac no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:18:02.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7529" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2872,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:18:02.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4806 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4806 I0417 00:18:02.151040 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4806, replica count: 2 I0417 00:18:05.201545 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 00:18:08.201800 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 17 00:18:08.201: INFO: Creating new exec pod Apr 17 00:18:13.215: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4806 execpod54dv9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 17 00:18:13.449: INFO: stderr: "I0417 00:18:13.352177 1938 log.go:172] (0xc000a28000) (0xc00080b4a0) Create stream\nI0417 00:18:13.352244 1938 log.go:172] (0xc000a28000) (0xc00080b4a0) Stream added, broadcasting: 1\nI0417 00:18:13.354212 1938 log.go:172] (0xc000a28000) Reply frame received for 1\nI0417 00:18:13.354256 1938 log.go:172] (0xc000a28000) (0xc000b16000) Create stream\nI0417 00:18:13.354269 1938 log.go:172] (0xc000a28000) (0xc000b16000) Stream added, broadcasting: 3\nI0417 00:18:13.355330 1938 log.go:172] (0xc000a28000) Reply frame received for 3\nI0417 00:18:13.355361 1938 log.go:172] (0xc000a28000) (0xc000510000) Create stream\nI0417 00:18:13.355375 1938 log.go:172] (0xc000a28000) (0xc000510000) Stream added, broadcasting: 5\nI0417 00:18:13.356331 1938 log.go:172] (0xc000a28000) Reply frame received for 5\nI0417 00:18:13.442196 1938 log.go:172] (0xc000a28000) Data frame received for 5\nI0417 00:18:13.442247 1938 log.go:172] (0xc000510000) (5) Data frame handling\nI0417 00:18:13.442286 1938 log.go:172] (0xc000510000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0417 00:18:13.443121 1938 log.go:172] (0xc000a28000) Data frame received for 5\nI0417 00:18:13.443164 1938 log.go:172] (0xc000510000) (5) Data frame handling\nI0417 00:18:13.443214 1938 log.go:172] (0xc000510000) (5) Data frame sent\nI0417 00:18:13.443302 1938 log.go:172] (0xc000a28000) Data frame received for 5\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0417 00:18:13.443316 1938 log.go:172] (0xc000510000) (5) Data frame handling\nI0417 00:18:13.443808 1938 log.go:172] (0xc000a28000) Data frame received for 3\nI0417 00:18:13.443836 1938 log.go:172] (0xc000b16000) (3) Data frame handling\nI0417 00:18:13.445753 1938 log.go:172] (0xc000a28000) Data frame received for 1\nI0417 
00:18:13.445764 1938 log.go:172] (0xc00080b4a0) (1) Data frame handling\nI0417 00:18:13.445773 1938 log.go:172] (0xc00080b4a0) (1) Data frame sent\nI0417 00:18:13.445998 1938 log.go:172] (0xc000a28000) (0xc00080b4a0) Stream removed, broadcasting: 1\nI0417 00:18:13.446041 1938 log.go:172] (0xc000a28000) Go away received\nI0417 00:18:13.446373 1938 log.go:172] (0xc000a28000) (0xc00080b4a0) Stream removed, broadcasting: 1\nI0417 00:18:13.446393 1938 log.go:172] (0xc000a28000) (0xc000b16000) Stream removed, broadcasting: 3\nI0417 00:18:13.446403 1938 log.go:172] (0xc000a28000) (0xc000510000) Stream removed, broadcasting: 5\n" Apr 17 00:18:13.449: INFO: stdout: "" Apr 17 00:18:13.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4806 execpod54dv9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.144.0 80' Apr 17 00:18:13.663: INFO: stderr: "I0417 00:18:13.578213 1959 log.go:172] (0xc00003b3f0) (0xc00090ebe0) Create stream\nI0417 00:18:13.578258 1959 log.go:172] (0xc00003b3f0) (0xc00090ebe0) Stream added, broadcasting: 1\nI0417 00:18:13.580462 1959 log.go:172] (0xc00003b3f0) Reply frame received for 1\nI0417 00:18:13.580514 1959 log.go:172] (0xc00003b3f0) (0xc0002b03c0) Create stream\nI0417 00:18:13.580534 1959 log.go:172] (0xc00003b3f0) (0xc0002b03c0) Stream added, broadcasting: 3\nI0417 00:18:13.581580 1959 log.go:172] (0xc00003b3f0) Reply frame received for 3\nI0417 00:18:13.581610 1959 log.go:172] (0xc00003b3f0) (0xc000512140) Create stream\nI0417 00:18:13.581620 1959 log.go:172] (0xc00003b3f0) (0xc000512140) Stream added, broadcasting: 5\nI0417 00:18:13.582490 1959 log.go:172] (0xc00003b3f0) Reply frame received for 5\nI0417 00:18:13.656296 1959 log.go:172] (0xc00003b3f0) Data frame received for 3\nI0417 00:18:13.656357 1959 log.go:172] (0xc00003b3f0) Data frame received for 5\nI0417 00:18:13.656414 1959 log.go:172] (0xc000512140) (5) Data frame handling\nI0417 00:18:13.656438 1959 
log.go:172] (0xc000512140) (5) Data frame sent\nI0417 00:18:13.656459 1959 log.go:172] (0xc00003b3f0) Data frame received for 5\nI0417 00:18:13.656475 1959 log.go:172] (0xc000512140) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.144.0 80\nConnection to 10.96.144.0 80 port [tcp/http] succeeded!\nI0417 00:18:13.656495 1959 log.go:172] (0xc0002b03c0) (3) Data frame handling\nI0417 00:18:13.658203 1959 log.go:172] (0xc00003b3f0) Data frame received for 1\nI0417 00:18:13.658218 1959 log.go:172] (0xc00090ebe0) (1) Data frame handling\nI0417 00:18:13.658233 1959 log.go:172] (0xc00090ebe0) (1) Data frame sent\nI0417 00:18:13.658244 1959 log.go:172] (0xc00003b3f0) (0xc00090ebe0) Stream removed, broadcasting: 1\nI0417 00:18:13.658309 1959 log.go:172] (0xc00003b3f0) Go away received\nI0417 00:18:13.658517 1959 log.go:172] (0xc00003b3f0) (0xc00090ebe0) Stream removed, broadcasting: 1\nI0417 00:18:13.658533 1959 log.go:172] (0xc00003b3f0) (0xc0002b03c0) Stream removed, broadcasting: 3\nI0417 00:18:13.658540 1959 log.go:172] (0xc00003b3f0) (0xc000512140) Stream removed, broadcasting: 5\n" Apr 17 00:18:13.663: INFO: stdout: "" Apr 17 00:18:13.663: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:18:13.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4806" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.696 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":156,"skipped":2884,"failed":0} SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:18:13.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:18:17.859: INFO: Waiting up to 5m0s for pod "client-envvars-b083c97d-7e8f-42d4-aa8e-3de5b60835fd" in namespace "pods-6801" to be "Succeeded or Failed" Apr 17 00:18:17.864: INFO: Pod "client-envvars-b083c97d-7e8f-42d4-aa8e-3de5b60835fd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.510326ms Apr 17 00:18:19.959: INFO: Pod "client-envvars-b083c97d-7e8f-42d4-aa8e-3de5b60835fd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.100315649s Apr 17 00:18:21.963: INFO: Pod "client-envvars-b083c97d-7e8f-42d4-aa8e-3de5b60835fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10394675s STEP: Saw pod success Apr 17 00:18:21.963: INFO: Pod "client-envvars-b083c97d-7e8f-42d4-aa8e-3de5b60835fd" satisfied condition "Succeeded or Failed" Apr 17 00:18:21.966: INFO: Trying to get logs from node latest-worker2 pod client-envvars-b083c97d-7e8f-42d4-aa8e-3de5b60835fd container env3cont: STEP: delete the pod Apr 17 00:18:21.995: INFO: Waiting for pod client-envvars-b083c97d-7e8f-42d4-aa8e-3de5b60835fd to disappear Apr 17 00:18:22.006: INFO: Pod client-envvars-b083c97d-7e8f-42d4-aa8e-3de5b60835fd no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:18:22.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6801" for this suite. • [SLOW TEST:8.306 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2886,"failed":0} [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:18:22.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 00:18:22.200: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61d10d53-f828-4fd7-97b7-17d66d9f55b6" in namespace "projected-2908" to be "Succeeded or Failed" Apr 17 00:18:22.252: INFO: Pod "downwardapi-volume-61d10d53-f828-4fd7-97b7-17d66d9f55b6": Phase="Pending", Reason="", readiness=false. Elapsed: 52.872782ms Apr 17 00:18:24.256: INFO: Pod "downwardapi-volume-61d10d53-f828-4fd7-97b7-17d66d9f55b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056306469s Apr 17 00:18:26.259: INFO: Pod "downwardapi-volume-61d10d53-f828-4fd7-97b7-17d66d9f55b6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.059726343s STEP: Saw pod success Apr 17 00:18:26.259: INFO: Pod "downwardapi-volume-61d10d53-f828-4fd7-97b7-17d66d9f55b6" satisfied condition "Succeeded or Failed" Apr 17 00:18:26.262: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-61d10d53-f828-4fd7-97b7-17d66d9f55b6 container client-container: STEP: delete the pod Apr 17 00:18:26.304: INFO: Waiting for pod downwardapi-volume-61d10d53-f828-4fd7-97b7-17d66d9f55b6 to disappear Apr 17 00:18:26.330: INFO: Pod downwardapi-volume-61d10d53-f828-4fd7-97b7-17d66d9f55b6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:18:26.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2908" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2886,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:18:26.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 17 00:18:26.464: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:18:33.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4215" for this suite. • [SLOW TEST:7.446 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":159,"skipped":2894,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:18:33.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating 
service test in namespace statefulset-9999 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9999 STEP: Creating statefulset with conflicting port in namespace statefulset-9999 STEP: Waiting until pod test-pod will start running in namespace statefulset-9999 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9999 Apr 17 00:18:35.956: INFO: Observed stateful pod in namespace: statefulset-9999, name: ss-0, uid: 7cc4f809-5c00-4b7a-90b2-7aeec916d4d7, status phase: Pending. Waiting for statefulset controller to delete. Apr 17 00:18:42.965: INFO: Observed stateful pod in namespace: statefulset-9999, name: ss-0, uid: 7cc4f809-5c00-4b7a-90b2-7aeec916d4d7, status phase: Failed. Waiting for statefulset controller to delete. Apr 17 00:18:42.972: INFO: Observed stateful pod in namespace: statefulset-9999, name: ss-0, uid: 7cc4f809-5c00-4b7a-90b2-7aeec916d4d7, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 17 00:18:42.981: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9999 STEP: Removing pod with conflicting port in namespace statefulset-9999 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9999 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 17 00:18:53.045: INFO: Deleting all statefulset in ns statefulset-9999 Apr 17 00:18:53.048: INFO: Scaling statefulset ss to 0 Apr 17 00:19:03.103: INFO: Waiting for statefulset status.replicas updated to 0 Apr 17 00:19:03.107: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:19:03.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9999" for this suite. • [SLOW TEST:29.320 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":160,"skipped":2916,"failed":0} [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:19:03.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 17 00:19:03.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7066' Apr 17 00:19:03.301: INFO: stderr: "" Apr 17 00:19:03.301: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Apr 17 00:19:03.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7066' Apr 17 00:19:07.832: INFO: stderr: "" Apr 17 00:19:07.832: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:19:07.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7066" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":161,"skipped":2916,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:19:07.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:19:07.912: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8068 I0417 00:19:07.926859 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8068, replica count: 1 I0417 00:19:08.977329 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 00:19:09.977587 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 00:19:10.977816 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 17 00:19:11.128: INFO: Created: latency-svc-5bqx9 Apr 17 00:19:11.133: INFO: Got endpoints: latency-svc-5bqx9 [55.478468ms] Apr 17 00:19:11.180: INFO: Created: latency-svc-54t9s Apr 17 00:19:11.193: INFO: Got endpoints: 
latency-svc-54t9s [60.296309ms] Apr 17 00:19:11.216: INFO: Created: latency-svc-jzwg9 Apr 17 00:19:11.248: INFO: Got endpoints: latency-svc-jzwg9 [115.318401ms] Apr 17 00:19:11.255: INFO: Created: latency-svc-hnmpm Apr 17 00:19:11.265: INFO: Got endpoints: latency-svc-hnmpm [130.832122ms] Apr 17 00:19:11.289: INFO: Created: latency-svc-md9qs Apr 17 00:19:11.301: INFO: Got endpoints: latency-svc-md9qs [167.689826ms] Apr 17 00:19:11.320: INFO: Created: latency-svc-wlknc Apr 17 00:19:11.337: INFO: Got endpoints: latency-svc-wlknc [203.627379ms] Apr 17 00:19:11.386: INFO: Created: latency-svc-zt4h2 Apr 17 00:19:11.408: INFO: Created: latency-svc-hzpzw Apr 17 00:19:11.408: INFO: Got endpoints: latency-svc-zt4h2 [274.177833ms] Apr 17 00:19:11.424: INFO: Got endpoints: latency-svc-hzpzw [290.513346ms] Apr 17 00:19:11.444: INFO: Created: latency-svc-mzmtk Apr 17 00:19:11.461: INFO: Got endpoints: latency-svc-mzmtk [327.261181ms] Apr 17 00:19:11.479: INFO: Created: latency-svc-w46mx Apr 17 00:19:11.535: INFO: Got endpoints: latency-svc-w46mx [400.997052ms] Apr 17 00:19:11.552: INFO: Created: latency-svc-qbghc Apr 17 00:19:11.562: INFO: Got endpoints: latency-svc-qbghc [427.974545ms] Apr 17 00:19:11.576: INFO: Created: latency-svc-h5rqd Apr 17 00:19:11.592: INFO: Got endpoints: latency-svc-h5rqd [457.94588ms] Apr 17 00:19:11.606: INFO: Created: latency-svc-x749s Apr 17 00:19:11.615: INFO: Got endpoints: latency-svc-x749s [481.658505ms] Apr 17 00:19:11.661: INFO: Created: latency-svc-ktpgk Apr 17 00:19:11.678: INFO: Got endpoints: latency-svc-ktpgk [544.047437ms] Apr 17 00:19:11.678: INFO: Created: latency-svc-rdjb2 Apr 17 00:19:11.690: INFO: Got endpoints: latency-svc-rdjb2 [556.836125ms] Apr 17 00:19:11.707: INFO: Created: latency-svc-9pz5m Apr 17 00:19:11.720: INFO: Got endpoints: latency-svc-9pz5m [587.239662ms] Apr 17 00:19:11.738: INFO: Created: latency-svc-rlmjg Apr 17 00:19:11.750: INFO: Got endpoints: latency-svc-rlmjg [556.75996ms] Apr 17 00:19:11.792: INFO: 
Created: latency-svc-mgsrd Apr 17 00:19:11.798: INFO: Got endpoints: latency-svc-mgsrd [549.43534ms] Apr 17 00:19:11.829: INFO: Created: latency-svc-fr244 Apr 17 00:19:11.852: INFO: Got endpoints: latency-svc-fr244 [587.809107ms] Apr 17 00:19:11.881: INFO: Created: latency-svc-b42q5 Apr 17 00:19:11.912: INFO: Got endpoints: latency-svc-b42q5 [610.585994ms] Apr 17 00:19:11.923: INFO: Created: latency-svc-wh98w Apr 17 00:19:11.936: INFO: Got endpoints: latency-svc-wh98w [599.091873ms] Apr 17 00:19:11.960: INFO: Created: latency-svc-lmwq5 Apr 17 00:19:12.014: INFO: Got endpoints: latency-svc-lmwq5 [605.988944ms] Apr 17 00:19:12.015: INFO: Created: latency-svc-rmfzm Apr 17 00:19:12.028: INFO: Got endpoints: latency-svc-rmfzm [604.457417ms] Apr 17 00:19:12.062: INFO: Created: latency-svc-5tjc2 Apr 17 00:19:12.077: INFO: Got endpoints: latency-svc-5tjc2 [616.747332ms] Apr 17 00:19:12.098: INFO: Created: latency-svc-p2brg Apr 17 00:19:12.113: INFO: Got endpoints: latency-svc-p2brg [578.283057ms] Apr 17 00:19:12.152: INFO: Created: latency-svc-nhddn Apr 17 00:19:12.154: INFO: Got endpoints: latency-svc-nhddn [592.721099ms] Apr 17 00:19:12.175: INFO: Created: latency-svc-lt9rl Apr 17 00:19:12.191: INFO: Got endpoints: latency-svc-lt9rl [599.151677ms] Apr 17 00:19:12.212: INFO: Created: latency-svc-qcj5d Apr 17 00:19:12.231: INFO: Got endpoints: latency-svc-qcj5d [615.489054ms] Apr 17 00:19:12.304: INFO: Created: latency-svc-2h7c5 Apr 17 00:19:12.362: INFO: Got endpoints: latency-svc-2h7c5 [684.078824ms] Apr 17 00:19:12.362: INFO: Created: latency-svc-nk9xz Apr 17 00:19:12.386: INFO: Got endpoints: latency-svc-nk9xz [695.699798ms] Apr 17 00:19:12.464: INFO: Created: latency-svc-tv5vm Apr 17 00:19:12.469: INFO: Got endpoints: latency-svc-tv5vm [748.826186ms] Apr 17 00:19:12.525: INFO: Created: latency-svc-m7hdh Apr 17 00:19:12.554: INFO: Got endpoints: latency-svc-m7hdh [803.371422ms] Apr 17 00:19:12.626: INFO: Created: latency-svc-7rw8p Apr 17 00:19:12.638: INFO: Got 
endpoints: latency-svc-7rw8p [839.416728ms] Apr 17 00:19:12.657: INFO: Created: latency-svc-cdjbl Apr 17 00:19:12.673: INFO: Got endpoints: latency-svc-cdjbl [820.461915ms] Apr 17 00:19:12.746: INFO: Created: latency-svc-pkbhv Apr 17 00:19:12.769: INFO: Got endpoints: latency-svc-pkbhv [857.383582ms] Apr 17 00:19:12.882: INFO: Created: latency-svc-vlqmc Apr 17 00:19:12.886: INFO: Got endpoints: latency-svc-vlqmc [949.209497ms] Apr 17 00:19:12.926: INFO: Created: latency-svc-kpsmj Apr 17 00:19:12.946: INFO: Got endpoints: latency-svc-kpsmj [932.472547ms] Apr 17 00:19:12.981: INFO: Created: latency-svc-542rq Apr 17 00:19:13.020: INFO: Got endpoints: latency-svc-542rq [991.335851ms] Apr 17 00:19:13.028: INFO: Created: latency-svc-g6cmr Apr 17 00:19:13.053: INFO: Got endpoints: latency-svc-g6cmr [975.829135ms] Apr 17 00:19:13.091: INFO: Created: latency-svc-cgnjn Apr 17 00:19:13.194: INFO: Got endpoints: latency-svc-cgnjn [1.080950033s] Apr 17 00:19:13.293: INFO: Created: latency-svc-6jrbd Apr 17 00:19:13.331: INFO: Got endpoints: latency-svc-6jrbd [1.176741307s] Apr 17 00:19:13.358: INFO: Created: latency-svc-cqxx9 Apr 17 00:19:13.371: INFO: Got endpoints: latency-svc-cqxx9 [1.180079267s] Apr 17 00:19:13.394: INFO: Created: latency-svc-9tgqk Apr 17 00:19:13.404: INFO: Got endpoints: latency-svc-9tgqk [1.173257385s] Apr 17 00:19:13.463: INFO: Created: latency-svc-dlcsp Apr 17 00:19:13.484: INFO: Got endpoints: latency-svc-dlcsp [1.12278649s] Apr 17 00:19:13.486: INFO: Created: latency-svc-c75mx Apr 17 00:19:13.500: INFO: Got endpoints: latency-svc-c75mx [1.113352759s] Apr 17 00:19:13.527: INFO: Created: latency-svc-p7csr Apr 17 00:19:13.542: INFO: Got endpoints: latency-svc-p7csr [1.072319373s] Apr 17 00:19:13.563: INFO: Created: latency-svc-2trd2 Apr 17 00:19:13.589: INFO: Got endpoints: latency-svc-2trd2 [1.035409846s] Apr 17 00:19:13.616: INFO: Created: latency-svc-6t97w Apr 17 00:19:13.644: INFO: Got endpoints: latency-svc-6t97w [1.005919023s] Apr 17 00:19:13.676: 
INFO: Created: latency-svc-nrfb6 Apr 17 00:19:13.726: INFO: Got endpoints: latency-svc-nrfb6 [1.053444409s] Apr 17 00:19:13.748: INFO: Created: latency-svc-qjrxh Apr 17 00:19:13.760: INFO: Got endpoints: latency-svc-qjrxh [990.65915ms] Apr 17 00:19:13.784: INFO: Created: latency-svc-6nfmh Apr 17 00:19:13.796: INFO: Got endpoints: latency-svc-6nfmh [910.598825ms] Apr 17 00:19:13.870: INFO: Created: latency-svc-4vbsd Apr 17 00:19:13.880: INFO: Got endpoints: latency-svc-4vbsd [933.862292ms] Apr 17 00:19:13.898: INFO: Created: latency-svc-5pwtv Apr 17 00:19:13.916: INFO: Got endpoints: latency-svc-5pwtv [895.997499ms] Apr 17 00:19:13.946: INFO: Created: latency-svc-zrf2b Apr 17 00:19:13.970: INFO: Got endpoints: latency-svc-zrf2b [917.067742ms] Apr 17 00:19:14.008: INFO: Created: latency-svc-zsjvx Apr 17 00:19:14.012: INFO: Got endpoints: latency-svc-zsjvx [817.579057ms] Apr 17 00:19:14.036: INFO: Created: latency-svc-ntcrz Apr 17 00:19:14.051: INFO: Got endpoints: latency-svc-ntcrz [719.538749ms] Apr 17 00:19:14.073: INFO: Created: latency-svc-wxvv4 Apr 17 00:19:14.087: INFO: Got endpoints: latency-svc-wxvv4 [715.667727ms] Apr 17 00:19:14.134: INFO: Created: latency-svc-nstq6 Apr 17 00:19:14.153: INFO: Got endpoints: latency-svc-nstq6 [748.733535ms] Apr 17 00:19:14.174: INFO: Created: latency-svc-tj2jt Apr 17 00:19:14.182: INFO: Got endpoints: latency-svc-tj2jt [697.617987ms] Apr 17 00:19:14.224: INFO: Created: latency-svc-84d8w Apr 17 00:19:14.231: INFO: Got endpoints: latency-svc-84d8w [730.985141ms] Apr 17 00:19:14.271: INFO: Created: latency-svc-dxq2n Apr 17 00:19:14.294: INFO: Got endpoints: latency-svc-dxq2n [752.588535ms] Apr 17 00:19:14.295: INFO: Created: latency-svc-ftkcp Apr 17 00:19:14.309: INFO: Got endpoints: latency-svc-ftkcp [719.520826ms] Apr 17 00:19:14.331: INFO: Created: latency-svc-c9rsg Apr 17 00:19:14.342: INFO: Got endpoints: latency-svc-c9rsg [698.103377ms] Apr 17 00:19:14.355: INFO: Created: latency-svc-mx2d7 Apr 17 00:19:14.409: INFO: Got 
endpoints: latency-svc-mx2d7 [682.675675ms] Apr 17 00:19:14.426: INFO: Created: latency-svc-lvd5s Apr 17 00:19:14.443: INFO: Got endpoints: latency-svc-lvd5s [683.060955ms] Apr 17 00:19:14.468: INFO: Created: latency-svc-z8mlm Apr 17 00:19:14.485: INFO: Got endpoints: latency-svc-z8mlm [689.036478ms] Apr 17 00:19:14.541: INFO: Created: latency-svc-ssgcc Apr 17 00:19:14.565: INFO: Got endpoints: latency-svc-ssgcc [684.672073ms] Apr 17 00:19:14.566: INFO: Created: latency-svc-8xm9p Apr 17 00:19:14.581: INFO: Got endpoints: latency-svc-8xm9p [665.219221ms] Apr 17 00:19:14.606: INFO: Created: latency-svc-wdvkd Apr 17 00:19:14.623: INFO: Got endpoints: latency-svc-wdvkd [652.92096ms] Apr 17 00:19:14.672: INFO: Created: latency-svc-wnlvv Apr 17 00:19:14.686: INFO: Got endpoints: latency-svc-wnlvv [674.272283ms] Apr 17 00:19:14.702: INFO: Created: latency-svc-7vlfr Apr 17 00:19:14.710: INFO: Got endpoints: latency-svc-7vlfr [658.844021ms] Apr 17 00:19:14.749: INFO: Created: latency-svc-5kkpm Apr 17 00:19:14.864: INFO: Got endpoints: latency-svc-5kkpm [777.41828ms] Apr 17 00:19:14.870: INFO: Created: latency-svc-jnxtx Apr 17 00:19:14.897: INFO: Got endpoints: latency-svc-jnxtx [744.217022ms] Apr 17 00:19:14.944: INFO: Created: latency-svc-d8zn6 Apr 17 00:19:14.955: INFO: Got endpoints: latency-svc-d8zn6 [772.786465ms] Apr 17 00:19:15.002: INFO: Created: latency-svc-84grd Apr 17 00:19:15.015: INFO: Got endpoints: latency-svc-84grd [784.768276ms] Apr 17 00:19:15.062: INFO: Created: latency-svc-d5msb Apr 17 00:19:15.086: INFO: Got endpoints: latency-svc-d5msb [791.155143ms] Apr 17 00:19:15.140: INFO: Created: latency-svc-mlpqj Apr 17 00:19:15.163: INFO: Created: latency-svc-2r5fx Apr 17 00:19:15.163: INFO: Got endpoints: latency-svc-mlpqj [854.529092ms] Apr 17 00:19:15.180: INFO: Got endpoints: latency-svc-2r5fx [837.891726ms] Apr 17 00:19:15.199: INFO: Created: latency-svc-nstrq Apr 17 00:19:15.216: INFO: Got endpoints: latency-svc-nstrq [806.884999ms] Apr 17 00:19:15.307: 
INFO: Created: latency-svc-vrbmx Apr 17 00:19:15.344: INFO: Got endpoints: latency-svc-vrbmx [900.70852ms] Apr 17 00:19:15.346: INFO: Created: latency-svc-td2vq Apr 17 00:19:15.353: INFO: Got endpoints: latency-svc-td2vq [867.964384ms] Apr 17 00:19:15.385: INFO: Created: latency-svc-484vw Apr 17 00:19:15.391: INFO: Got endpoints: latency-svc-484vw [825.741196ms] Apr 17 00:19:15.439: INFO: Created: latency-svc-t2j6b Apr 17 00:19:15.463: INFO: Created: latency-svc-l72lj Apr 17 00:19:15.463: INFO: Got endpoints: latency-svc-t2j6b [882.173905ms] Apr 17 00:19:15.478: INFO: Got endpoints: latency-svc-l72lj [854.474026ms] Apr 17 00:19:15.494: INFO: Created: latency-svc-8snnf Apr 17 00:19:15.507: INFO: Got endpoints: latency-svc-8snnf [820.591949ms] Apr 17 00:19:15.524: INFO: Created: latency-svc-p8x8w Apr 17 00:19:15.536: INFO: Got endpoints: latency-svc-p8x8w [826.465424ms] Apr 17 00:19:15.560: INFO: Created: latency-svc-644fr Apr 17 00:19:15.572: INFO: Got endpoints: latency-svc-644fr [708.207345ms] Apr 17 00:19:15.596: INFO: Created: latency-svc-4ntm5 Apr 17 00:19:15.609: INFO: Got endpoints: latency-svc-4ntm5 [711.831415ms] Apr 17 00:19:15.626: INFO: Created: latency-svc-6nh6p Apr 17 00:19:15.655: INFO: Got endpoints: latency-svc-6nh6p [699.88879ms] Apr 17 00:19:15.703: INFO: Created: latency-svc-fcrfl Apr 17 00:19:15.727: INFO: Got endpoints: latency-svc-fcrfl [711.32225ms] Apr 17 00:19:15.727: INFO: Created: latency-svc-x8g5s Apr 17 00:19:15.743: INFO: Got endpoints: latency-svc-x8g5s [657.546133ms] Apr 17 00:19:15.776: INFO: Created: latency-svc-qn4gd Apr 17 00:19:15.791: INFO: Got endpoints: latency-svc-qn4gd [627.385139ms] Apr 17 00:19:15.834: INFO: Created: latency-svc-fwwqq Apr 17 00:19:15.866: INFO: Got endpoints: latency-svc-fwwqq [685.915911ms] Apr 17 00:19:15.866: INFO: Created: latency-svc-89rdh Apr 17 00:19:15.881: INFO: Got endpoints: latency-svc-89rdh [665.076867ms] Apr 17 00:19:15.895: INFO: Created: latency-svc-b8nt8 Apr 17 00:19:15.905: INFO: Got 
endpoints: latency-svc-b8nt8 [561.341369ms] Apr 17 00:19:15.984: INFO: Created: latency-svc-xhb2r Apr 17 00:19:15.989: INFO: Got endpoints: latency-svc-xhb2r [635.638935ms] Apr 17 00:19:16.008: INFO: Created: latency-svc-5bk7b Apr 17 00:19:16.025: INFO: Got endpoints: latency-svc-5bk7b [634.279412ms] Apr 17 00:19:16.046: INFO: Created: latency-svc-6g9jc Apr 17 00:19:16.058: INFO: Got endpoints: latency-svc-6g9jc [594.628744ms] Apr 17 00:19:16.075: INFO: Created: latency-svc-2nmfb Apr 17 00:19:16.115: INFO: Got endpoints: latency-svc-2nmfb [637.477706ms] Apr 17 00:19:16.123: INFO: Created: latency-svc-rjdwh Apr 17 00:19:16.142: INFO: Got endpoints: latency-svc-rjdwh [635.281422ms] Apr 17 00:19:16.177: INFO: Created: latency-svc-z72cb Apr 17 00:19:16.202: INFO: Got endpoints: latency-svc-z72cb [665.940571ms] Apr 17 00:19:16.247: INFO: Created: latency-svc-npfjj Apr 17 00:19:16.322: INFO: Got endpoints: latency-svc-npfjj [749.610522ms] Apr 17 00:19:16.345: INFO: Created: latency-svc-glknn Apr 17 00:19:16.373: INFO: Got endpoints: latency-svc-glknn [763.900199ms] Apr 17 00:19:16.463: INFO: Created: latency-svc-g89zf Apr 17 00:19:16.511: INFO: Got endpoints: latency-svc-g89zf [855.621271ms] Apr 17 00:19:16.554: INFO: Created: latency-svc-q9mxj Apr 17 00:19:16.570: INFO: Got endpoints: latency-svc-q9mxj [843.401091ms] Apr 17 00:19:16.590: INFO: Created: latency-svc-vdtqd Apr 17 00:19:16.606: INFO: Got endpoints: latency-svc-vdtqd [862.800625ms] Apr 17 00:19:16.649: INFO: Created: latency-svc-xg8j7 Apr 17 00:19:16.669: INFO: Got endpoints: latency-svc-xg8j7 [878.232301ms] Apr 17 00:19:16.705: INFO: Created: latency-svc-h2cbm Apr 17 00:19:16.713: INFO: Got endpoints: latency-svc-h2cbm [847.706852ms] Apr 17 00:19:16.729: INFO: Created: latency-svc-bs6wk Apr 17 00:19:16.737: INFO: Got endpoints: latency-svc-bs6wk [856.109844ms] Apr 17 00:19:16.805: INFO: Created: latency-svc-sk2sk Apr 17 00:19:16.849: INFO: Got endpoints: latency-svc-sk2sk [943.555899ms] Apr 17 00:19:16.850: 
INFO: Created: latency-svc-t55v2 Apr 17 00:19:16.861: INFO: Got endpoints: latency-svc-t55v2 [871.613373ms] Apr 17 00:19:16.884: INFO: Created: latency-svc-x92qg Apr 17 00:19:16.897: INFO: Got endpoints: latency-svc-x92qg [871.378999ms] Apr 17 00:19:16.955: INFO: Created: latency-svc-jb27j Apr 17 00:19:16.958: INFO: Got endpoints: latency-svc-jb27j [899.756069ms] Apr 17 00:19:17.005: INFO: Created: latency-svc-cpb62 Apr 17 00:19:17.035: INFO: Got endpoints: latency-svc-cpb62 [919.602603ms] Apr 17 00:19:17.098: INFO: Created: latency-svc-w9jgv Apr 17 00:19:17.118: INFO: Got endpoints: latency-svc-w9jgv [975.661958ms] Apr 17 00:19:17.118: INFO: Created: latency-svc-ftg57 Apr 17 00:19:17.142: INFO: Got endpoints: latency-svc-ftg57 [939.759598ms] Apr 17 00:19:17.167: INFO: Created: latency-svc-6fkt4 Apr 17 00:19:17.184: INFO: Got endpoints: latency-svc-6fkt4 [862.411001ms] Apr 17 00:19:17.229: INFO: Created: latency-svc-mc5lz Apr 17 00:19:17.251: INFO: Got endpoints: latency-svc-mc5lz [877.67419ms] Apr 17 00:19:17.251: INFO: Created: latency-svc-qfj4c Apr 17 00:19:17.275: INFO: Got endpoints: latency-svc-qfj4c [763.782386ms] Apr 17 00:19:17.316: INFO: Created: latency-svc-kf2lf Apr 17 00:19:17.373: INFO: Got endpoints: latency-svc-kf2lf [802.648136ms] Apr 17 00:19:17.388: INFO: Created: latency-svc-nrxvx Apr 17 00:19:17.403: INFO: Got endpoints: latency-svc-nrxvx [796.642365ms] Apr 17 00:19:17.436: INFO: Created: latency-svc-ll2rc Apr 17 00:19:17.529: INFO: Got endpoints: latency-svc-ll2rc [859.743754ms] Apr 17 00:19:17.532: INFO: Created: latency-svc-mgql4 Apr 17 00:19:17.551: INFO: Created: latency-svc-xc6pg Apr 17 00:19:17.551: INFO: Got endpoints: latency-svc-mgql4 [836.968279ms] Apr 17 00:19:17.564: INFO: Got endpoints: latency-svc-xc6pg [826.819673ms] Apr 17 00:19:17.592: INFO: Created: latency-svc-s86fr Apr 17 00:19:17.610: INFO: Got endpoints: latency-svc-s86fr [760.805443ms] Apr 17 00:19:17.627: INFO: Created: latency-svc-2hlff Apr 17 00:19:17.654: INFO: Got 
endpoints: latency-svc-2hlff [793.534402ms] Apr 17 00:19:17.676: INFO: Created: latency-svc-pzw2t Apr 17 00:19:17.690: INFO: Got endpoints: latency-svc-pzw2t [793.064255ms] Apr 17 00:19:17.707: INFO: Created: latency-svc-8dn4z Apr 17 00:19:17.718: INFO: Got endpoints: latency-svc-8dn4z [759.686244ms] Apr 17 00:19:17.736: INFO: Created: latency-svc-vwq2n Apr 17 00:19:17.754: INFO: Got endpoints: latency-svc-vwq2n [718.582054ms] Apr 17 00:19:17.792: INFO: Created: latency-svc-lhw46 Apr 17 00:19:17.815: INFO: Got endpoints: latency-svc-lhw46 [696.838272ms] Apr 17 00:19:17.815: INFO: Created: latency-svc-lqj4s Apr 17 00:19:17.832: INFO: Got endpoints: latency-svc-lqj4s [689.543189ms] Apr 17 00:19:17.855: INFO: Created: latency-svc-qbl86 Apr 17 00:19:17.892: INFO: Got endpoints: latency-svc-qbl86 [707.251473ms] Apr 17 00:19:17.946: INFO: Created: latency-svc-kvz8z Apr 17 00:19:17.960: INFO: Got endpoints: latency-svc-kvz8z [709.104891ms] Apr 17 00:19:17.977: INFO: Created: latency-svc-8nt5l Apr 17 00:19:17.990: INFO: Got endpoints: latency-svc-8nt5l [715.03556ms] Apr 17 00:19:18.007: INFO: Created: latency-svc-t5wwf Apr 17 00:19:18.061: INFO: Got endpoints: latency-svc-t5wwf [688.53684ms] Apr 17 00:19:18.084: INFO: Created: latency-svc-5gf86 Apr 17 00:19:18.098: INFO: Got endpoints: latency-svc-5gf86 [695.028403ms] Apr 17 00:19:18.120: INFO: Created: latency-svc-5zv8w Apr 17 00:19:18.134: INFO: Got endpoints: latency-svc-5zv8w [604.668578ms] Apr 17 00:19:18.156: INFO: Created: latency-svc-pcdkr Apr 17 00:19:18.187: INFO: Got endpoints: latency-svc-pcdkr [636.901881ms] Apr 17 00:19:18.204: INFO: Created: latency-svc-wmdxd Apr 17 00:19:18.215: INFO: Got endpoints: latency-svc-wmdxd [650.919244ms] Apr 17 00:19:18.235: INFO: Created: latency-svc-xbzzh Apr 17 00:19:18.251: INFO: Got endpoints: latency-svc-xbzzh [640.833227ms] Apr 17 00:19:18.274: INFO: Created: latency-svc-v66lp Apr 17 00:19:18.319: INFO: Got endpoints: latency-svc-v66lp [664.650068ms] Apr 17 00:19:18.337: 
INFO: Created: latency-svc-wjfbv Apr 17 00:19:18.352: INFO: Got endpoints: latency-svc-wjfbv [662.708465ms] Apr 17 00:19:18.371: INFO: Created: latency-svc-d9ds9 Apr 17 00:19:18.382: INFO: Got endpoints: latency-svc-d9ds9 [664.748484ms] Apr 17 00:19:18.395: INFO: Created: latency-svc-llskq Apr 17 00:19:18.414: INFO: Got endpoints: latency-svc-llskq [659.844644ms] Apr 17 00:19:18.463: INFO: Created: latency-svc-m7pcr Apr 17 00:19:18.472: INFO: Got endpoints: latency-svc-m7pcr [657.800831ms] Apr 17 00:19:18.504: INFO: Created: latency-svc-94vlm Apr 17 00:19:18.517: INFO: Got endpoints: latency-svc-94vlm [684.985961ms] Apr 17 00:19:18.540: INFO: Created: latency-svc-gdnwx Apr 17 00:19:18.559: INFO: Got endpoints: latency-svc-gdnwx [667.485712ms] Apr 17 00:19:18.594: INFO: Created: latency-svc-6k9qm Apr 17 00:19:18.601: INFO: Got endpoints: latency-svc-6k9qm [641.004748ms] Apr 17 00:19:18.624: INFO: Created: latency-svc-z7cfb Apr 17 00:19:18.637: INFO: Got endpoints: latency-svc-z7cfb [647.068935ms] Apr 17 00:19:18.659: INFO: Created: latency-svc-8rlnl Apr 17 00:19:18.673: INFO: Got endpoints: latency-svc-8rlnl [611.088576ms] Apr 17 00:19:18.768: INFO: Created: latency-svc-2sqd5 Apr 17 00:19:18.815: INFO: Got endpoints: latency-svc-2sqd5 [717.435546ms] Apr 17 00:19:18.816: INFO: Created: latency-svc-n5npr Apr 17 00:19:18.828: INFO: Got endpoints: latency-svc-n5npr [694.750263ms] Apr 17 00:19:18.858: INFO: Created: latency-svc-sg67t Apr 17 00:19:18.918: INFO: Got endpoints: latency-svc-sg67t [730.236263ms] Apr 17 00:19:18.923: INFO: Created: latency-svc-5l9p4 Apr 17 00:19:18.936: INFO: Got endpoints: latency-svc-5l9p4 [720.568582ms] Apr 17 00:19:18.954: INFO: Created: latency-svc-rq7ls Apr 17 00:19:18.974: INFO: Got endpoints: latency-svc-rq7ls [723.0928ms] Apr 17 00:19:18.990: INFO: Created: latency-svc-8h62l Apr 17 00:19:19.006: INFO: Got endpoints: latency-svc-8h62l [686.620647ms] Apr 17 00:19:19.044: INFO: Created: latency-svc-s7s8d Apr 17 00:19:19.067: INFO: Got 
endpoints: latency-svc-s7s8d [714.657489ms] Apr 17 00:19:19.091: INFO: Created: latency-svc-g4hhn Apr 17 00:19:19.113: INFO: Got endpoints: latency-svc-g4hhn [731.104434ms] Apr 17 00:19:19.182: INFO: Created: latency-svc-q5ms7 Apr 17 00:19:19.212: INFO: Created: latency-svc-lqcfc Apr 17 00:19:19.212: INFO: Got endpoints: latency-svc-q5ms7 [798.22456ms] Apr 17 00:19:19.236: INFO: Got endpoints: latency-svc-lqcfc [763.269445ms] Apr 17 00:19:19.255: INFO: Created: latency-svc-n5sdl Apr 17 00:19:19.266: INFO: Got endpoints: latency-svc-n5sdl [749.060434ms] Apr 17 00:19:19.314: INFO: Created: latency-svc-s9x7r Apr 17 00:19:19.344: INFO: Created: latency-svc-7vkg6 Apr 17 00:19:19.344: INFO: Got endpoints: latency-svc-s9x7r [784.778374ms] Apr 17 00:19:19.385: INFO: Got endpoints: latency-svc-7vkg6 [784.447305ms] Apr 17 00:19:19.451: INFO: Created: latency-svc-bvdm4 Apr 17 00:19:19.478: INFO: Created: latency-svc-r5j5l Apr 17 00:19:19.478: INFO: Got endpoints: latency-svc-bvdm4 [840.938593ms] Apr 17 00:19:19.499: INFO: Got endpoints: latency-svc-r5j5l [826.384328ms] Apr 17 00:19:19.518: INFO: Created: latency-svc-cgnk2 Apr 17 00:19:19.548: INFO: Got endpoints: latency-svc-cgnk2 [732.533857ms] Apr 17 00:19:19.600: INFO: Created: latency-svc-l5vr9 Apr 17 00:19:19.619: INFO: Got endpoints: latency-svc-l5vr9 [790.572976ms] Apr 17 00:19:19.620: INFO: Created: latency-svc-x8dww Apr 17 00:19:19.629: INFO: Got endpoints: latency-svc-x8dww [710.794369ms] Apr 17 00:19:19.643: INFO: Created: latency-svc-s27g7 Apr 17 00:19:19.652: INFO: Got endpoints: latency-svc-s27g7 [716.553887ms] Apr 17 00:19:19.667: INFO: Created: latency-svc-gtjfj Apr 17 00:19:19.676: INFO: Got endpoints: latency-svc-gtjfj [702.365489ms] Apr 17 00:19:19.691: INFO: Created: latency-svc-lnvvw Apr 17 00:19:19.738: INFO: Got endpoints: latency-svc-lnvvw [732.399561ms] Apr 17 00:19:19.740: INFO: Created: latency-svc-z6kbb Apr 17 00:19:19.748: INFO: Got endpoints: latency-svc-z6kbb [680.864531ms] Apr 17 00:19:19.770: 
INFO: Created: latency-svc-f7m7z Apr 17 00:19:19.800: INFO: Got endpoints: latency-svc-f7m7z [686.088755ms] Apr 17 00:19:19.816: INFO: Created: latency-svc-jzxq9 Apr 17 00:19:19.835: INFO: Got endpoints: latency-svc-jzxq9 [622.995427ms] Apr 17 00:19:19.882: INFO: Created: latency-svc-sk9ql Apr 17 00:19:19.888: INFO: Got endpoints: latency-svc-sk9ql [652.636009ms] Apr 17 00:19:19.907: INFO: Created: latency-svc-lnxvp Apr 17 00:19:19.924: INFO: Got endpoints: latency-svc-lnxvp [658.573106ms] Apr 17 00:19:19.949: INFO: Created: latency-svc-tmhsz Apr 17 00:19:19.967: INFO: Got endpoints: latency-svc-tmhsz [622.542286ms] Apr 17 00:19:20.020: INFO: Created: latency-svc-dsg44 Apr 17 00:19:20.027: INFO: Got endpoints: latency-svc-dsg44 [641.195918ms] Apr 17 00:19:20.058: INFO: Created: latency-svc-ffgjd Apr 17 00:19:20.069: INFO: Got endpoints: latency-svc-ffgjd [590.826329ms] Apr 17 00:19:20.094: INFO: Created: latency-svc-qgj8z Apr 17 00:19:20.108: INFO: Got endpoints: latency-svc-qgj8z [608.750697ms] Apr 17 00:19:20.158: INFO: Created: latency-svc-zfdmh Apr 17 00:19:20.179: INFO: Created: latency-svc-lf6hl Apr 17 00:19:20.179: INFO: Got endpoints: latency-svc-zfdmh [631.252464ms] Apr 17 00:19:20.186: INFO: Got endpoints: latency-svc-lf6hl [566.599907ms] Apr 17 00:19:20.201: INFO: Created: latency-svc-gcqk9 Apr 17 00:19:20.215: INFO: Got endpoints: latency-svc-gcqk9 [586.362801ms] Apr 17 00:19:20.231: INFO: Created: latency-svc-c6474 Apr 17 00:19:20.241: INFO: Got endpoints: latency-svc-c6474 [588.272245ms] Apr 17 00:19:20.295: INFO: Created: latency-svc-btgtx Apr 17 00:19:20.321: INFO: Got endpoints: latency-svc-btgtx [645.123259ms] Apr 17 00:19:20.321: INFO: Created: latency-svc-lcz7k Apr 17 00:19:20.352: INFO: Got endpoints: latency-svc-lcz7k [613.267733ms] Apr 17 00:19:20.382: INFO: Created: latency-svc-n6l2d Apr 17 00:19:20.421: INFO: Got endpoints: latency-svc-n6l2d [672.829361ms] Apr 17 00:19:20.434: INFO: Created: latency-svc-9t5gc Apr 17 00:19:20.452: INFO: Got 
endpoints: latency-svc-9t5gc [652.491921ms] Apr 17 00:19:20.470: INFO: Created: latency-svc-rvnrt Apr 17 00:19:20.500: INFO: Got endpoints: latency-svc-rvnrt [665.499383ms] Apr 17 00:19:20.552: INFO: Created: latency-svc-w5wm2 Apr 17 00:19:20.560: INFO: Got endpoints: latency-svc-w5wm2 [671.399477ms] Apr 17 00:19:20.591: INFO: Created: latency-svc-5p5st Apr 17 00:19:20.614: INFO: Got endpoints: latency-svc-5p5st [689.602338ms] Apr 17 00:19:20.627: INFO: Created: latency-svc-26pbw Apr 17 00:19:20.638: INFO: Got endpoints: latency-svc-26pbw [671.317691ms] Apr 17 00:19:20.651: INFO: Created: latency-svc-9jflq Apr 17 00:19:20.686: INFO: Got endpoints: latency-svc-9jflq [659.225883ms] Apr 17 00:19:20.699: INFO: Created: latency-svc-c29pq Apr 17 00:19:20.719: INFO: Got endpoints: latency-svc-c29pq [650.401544ms] Apr 17 00:19:20.746: INFO: Created: latency-svc-8wg2b Apr 17 00:19:20.761: INFO: Got endpoints: latency-svc-8wg2b [652.805092ms] Apr 17 00:19:20.783: INFO: Created: latency-svc-hgkrn Apr 17 00:19:20.816: INFO: Got endpoints: latency-svc-hgkrn [636.476996ms] Apr 17 00:19:20.843: INFO: Created: latency-svc-rhqvh Apr 17 00:19:20.875: INFO: Got endpoints: latency-svc-rhqvh [689.34554ms] Apr 17 00:19:20.908: INFO: Created: latency-svc-dxnr4 Apr 17 00:19:20.936: INFO: Got endpoints: latency-svc-dxnr4 [720.711892ms] Apr 17 00:19:20.956: INFO: Created: latency-svc-nx4lp Apr 17 00:19:20.974: INFO: Got endpoints: latency-svc-nx4lp [733.148029ms] Apr 17 00:19:20.999: INFO: Created: latency-svc-bvdhl Apr 17 00:19:21.012: INFO: Got endpoints: latency-svc-bvdhl [691.031709ms] Apr 17 00:19:21.080: INFO: Created: latency-svc-8dl6t Apr 17 00:19:21.083: INFO: Got endpoints: latency-svc-8dl6t [730.98166ms] Apr 17 00:19:21.083: INFO: Latencies: [60.296309ms 115.318401ms 130.832122ms 167.689826ms 203.627379ms 274.177833ms 290.513346ms 327.261181ms 400.997052ms 427.974545ms 457.94588ms 481.658505ms 544.047437ms 549.43534ms 556.75996ms 556.836125ms 561.341369ms 566.599907ms 
578.283057ms 586.362801ms 587.239662ms 587.809107ms 588.272245ms 590.826329ms 592.721099ms 594.628744ms 599.091873ms 599.151677ms 604.457417ms 604.668578ms 605.988944ms 608.750697ms 610.585994ms 611.088576ms 613.267733ms 615.489054ms 616.747332ms 622.542286ms 622.995427ms 627.385139ms 631.252464ms 634.279412ms 635.281422ms 635.638935ms 636.476996ms 636.901881ms 637.477706ms 640.833227ms 641.004748ms 641.195918ms 645.123259ms 647.068935ms 650.401544ms 650.919244ms 652.491921ms 652.636009ms 652.805092ms 652.92096ms 657.546133ms 657.800831ms 658.573106ms 658.844021ms 659.225883ms 659.844644ms 662.708465ms 664.650068ms 664.748484ms 665.076867ms 665.219221ms 665.499383ms 665.940571ms 667.485712ms 671.317691ms 671.399477ms 672.829361ms 674.272283ms 680.864531ms 682.675675ms 683.060955ms 684.078824ms 684.672073ms 684.985961ms 685.915911ms 686.088755ms 686.620647ms 688.53684ms 689.036478ms 689.34554ms 689.543189ms 689.602338ms 691.031709ms 694.750263ms 695.028403ms 695.699798ms 696.838272ms 697.617987ms 698.103377ms 699.88879ms 702.365489ms 707.251473ms 708.207345ms 709.104891ms 710.794369ms 711.32225ms 711.831415ms 714.657489ms 715.03556ms 715.667727ms 716.553887ms 717.435546ms 718.582054ms 719.520826ms 719.538749ms 720.568582ms 720.711892ms 723.0928ms 730.236263ms 730.98166ms 730.985141ms 731.104434ms 732.399561ms 732.533857ms 733.148029ms 744.217022ms 748.733535ms 748.826186ms 749.060434ms 749.610522ms 752.588535ms 759.686244ms 760.805443ms 763.269445ms 763.782386ms 763.900199ms 772.786465ms 777.41828ms 784.447305ms 784.768276ms 784.778374ms 790.572976ms 791.155143ms 793.064255ms 793.534402ms 796.642365ms 798.22456ms 802.648136ms 803.371422ms 806.884999ms 817.579057ms 820.461915ms 820.591949ms 825.741196ms 826.384328ms 826.465424ms 826.819673ms 836.968279ms 837.891726ms 839.416728ms 840.938593ms 843.401091ms 847.706852ms 854.474026ms 854.529092ms 855.621271ms 856.109844ms 857.383582ms 859.743754ms 862.411001ms 862.800625ms 867.964384ms 871.378999ms 871.613373ms 
877.67419ms 878.232301ms 882.173905ms 895.997499ms 899.756069ms 900.70852ms 910.598825ms 917.067742ms 919.602603ms 932.472547ms 933.862292ms 939.759598ms 943.555899ms 949.209497ms 975.661958ms 975.829135ms 990.65915ms 991.335851ms 1.005919023s 1.035409846s 1.053444409s 1.072319373s 1.080950033s 1.113352759s 1.12278649s 1.173257385s 1.176741307s 1.180079267s] Apr 17 00:19:21.083: INFO: 50 %ile: 708.207345ms Apr 17 00:19:21.083: INFO: 90 %ile: 919.602603ms Apr 17 00:19:21.083: INFO: 99 %ile: 1.176741307s Apr 17 00:19:21.083: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:19:21.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8068" for this suite. • [SLOW TEST:13.254 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":162,"skipped":2955,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:19:21.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:19:21.224: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 17 00:19:26.228: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 17 00:19:26.228: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 17 00:19:26.268: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-213 /apis/apps/v1/namespaces/deployment-213/deployments/test-cleanup-deployment 00e476d2-8c7a-4d10-8b61-7a29bdf2a441 8670329 1 2020-04-17 00:19:26 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002998398 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 17 00:19:26.298: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-213 /apis/apps/v1/namespaces/deployment-213/replicasets/test-cleanup-deployment-577c77b589 b44d6a8b-1a20-4d9b-b536-4a8f577cfaf8 8670334 1 2020-04-17 00:19:26 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 00e476d2-8c7a-4d10-8b61-7a29bdf2a441 0xc002ab15d7 0xc002ab15d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ab1648 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 17 00:19:26.298: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 17 00:19:26.298: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-213 /apis/apps/v1/namespaces/deployment-213/replicasets/test-cleanup-controller 262345eb-4d81-4774-ab58-e041c4146d91 8670331 1 2020-04-17 00:19:21 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 00e476d2-8c7a-4d10-8b61-7a29bdf2a441 0xc002ab14bf 0xc002ab1500}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002ab1568 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 17 00:19:26.336: INFO: Pod "test-cleanup-controller-8zd2q" is available: &Pod{ObjectMeta:{test-cleanup-controller-8zd2q test-cleanup-controller- deployment-213 
/api/v1/namespaces/deployment-213/pods/test-cleanup-controller-8zd2q bbe837df-d8e8-438c-86bc-a089df5e917c 8670317 0 2020-04-17 00:19:21 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 262345eb-4d81-4774-ab58-e041c4146d91 0xc002ab1b07 0xc002ab1b08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5ss6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5ss6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5ss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,
Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:19:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:19:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:19:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:19:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.5,StartTime:2020-04-17 00:19:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:19:23 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://283b3b26bd696cacee957881a373636fbf2d3625ceaa51ca90967ed9c773bc2c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:19:26.336: INFO: Pod "test-cleanup-deployment-577c77b589-tbc27" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-tbc27 test-cleanup-deployment-577c77b589- deployment-213 /api/v1/namespaces/deployment-213/pods/test-cleanup-deployment-577c77b589-tbc27 f09f6adc-01e6-4364-9c8b-a418a317a3b9 8670340 0 2020-04-17 00:19:26 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 b44d6a8b-1a20-4d9b-b536-4a8f577cfaf8 0xc002ab1c97 0xc002ab1c98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5ss6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5ss6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5ss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePull
Secrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:19:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:19:26.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-213" for this suite. 
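The Deployment dump above shows the default RollingUpdate strategy (maxUnavailable and maxSurge both 25%) together with the annotations `deployment.kubernetes.io/desired-replicas:1` and `deployment.kubernetes.io/max-replicas:2`. A minimal sketch of how those percentages resolve to absolute pod counts (the function name is ours; the rounding directions, surge up and unavailable down, match the documented controller behavior):

```python
import math

def rolling_update_bounds(replicas, max_surge_pct, max_unavailable_pct):
    """Resolve percentage-based strategy values the way the Deployment
    controller does: maxSurge rounds up, maxUnavailable rounds down."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return surge, unavailable

# The 1-replica deployment above with the default 25%/25% strategy:
surge, unavailable = rolling_update_bounds(1, 25, 25)
print(1 + surge)    # 2 -- matches deployment.kubernetes.io/max-replicas:2
print(unavailable)  # 0 -- the old pod must stay up until the new one is ready
```

The asymmetric rounding is why even a single-replica deployment can roll without downtime: surge always resolves to at least one extra pod, while unavailable resolves to zero.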
• [SLOW TEST:5.319 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":163,"skipped":2968,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:19:26.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:19:26.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6869" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":164,"skipped":2975,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:19:26.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 00:19:27.140: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 17 00:19:29.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679567, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679567, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679567, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679567, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 00:19:31.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679567, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679567, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679567, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722679567, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 00:19:34.357: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:19:34.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: 
Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:19:35.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2943" for this suite. STEP: Destroying namespace "webhook-2943-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.506 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":165,"skipped":2980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:19:36.082: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Apr 17 00:19:36.561: INFO: Waiting up to 5m0s for pod "pod-c1653fce-2252-4076-8266-73c677f59b55" in namespace "emptydir-4647" to be "Succeeded or Failed" Apr 17 00:19:36.602: INFO: Pod "pod-c1653fce-2252-4076-8266-73c677f59b55": Phase="Pending", Reason="", readiness=false. Elapsed: 41.34255ms Apr 17 00:19:38.674: INFO: Pod "pod-c1653fce-2252-4076-8266-73c677f59b55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113464518s Apr 17 00:19:40.690: INFO: Pod "pod-c1653fce-2252-4076-8266-73c677f59b55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129421139s STEP: Saw pod success Apr 17 00:19:40.690: INFO: Pod "pod-c1653fce-2252-4076-8266-73c677f59b55" satisfied condition "Succeeded or Failed" Apr 17 00:19:40.707: INFO: Trying to get logs from node latest-worker pod pod-c1653fce-2252-4076-8266-73c677f59b55 container test-container: STEP: delete the pod Apr 17 00:19:40.810: INFO: Waiting for pod pod-c1653fce-2252-4076-8266-73c677f59b55 to disappear Apr 17 00:19:40.830: INFO: Pod pod-c1653fce-2252-4076-8266-73c677f59b55 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:19:40.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4647" for this suite. 
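The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines with growing `Elapsed:` values come from a poll-until-condition loop in the test framework. A minimal sketch of that pattern (names and intervals are ours, not the framework's):

```python
import time

def wait_for_condition(check, timeout_s=300, interval_s=2):
    """Poll check() until it returns True or the timeout expires,
    mirroring the framework's 'Waiting up to 5m0s ...' loops."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False
```

Each poll iteration corresponds to one `Phase="..."` line in the log; the loop exits as soon as the pod reaches a terminal phase.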
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":3048,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:19:40.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 17 00:19:41.082: INFO: namespace kubectl-5290 Apr 17 00:19:41.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5290' Apr 17 00:19:41.458: INFO: stderr: "" Apr 17 00:19:41.458: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 17 00:19:42.517: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 00:19:42.518: INFO: Found 0 / 1 Apr 17 00:19:43.643: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 00:19:43.643: INFO: Found 0 / 1 Apr 17 00:19:44.469: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 00:19:44.469: INFO: Found 0 / 1 Apr 17 00:19:45.471: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 00:19:45.471: INFO: Found 1 / 1 Apr 17 00:19:45.471: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Apr 17 00:19:45.487: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 00:19:45.487: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 17 00:19:45.487: INFO: wait on agnhost-master startup in kubectl-5290 Apr 17 00:19:45.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-k6jm2 agnhost-master --namespace=kubectl-5290' Apr 17 00:19:45.604: INFO: stderr: "" Apr 17 00:19:45.604: INFO: stdout: "Paused\n" STEP: exposing RC Apr 17 00:19:45.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5290' Apr 17 00:19:45.779: INFO: stderr: "" Apr 17 00:19:45.779: INFO: stdout: "service/rm2 exposed\n" Apr 17 00:19:45.790: INFO: Service rm2 in namespace kubectl-5290 found. STEP: exposing service Apr 17 00:19:47.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5290' Apr 17 00:19:48.052: INFO: stderr: "" Apr 17 00:19:48.052: INFO: stdout: "service/rm3 exposed\n" Apr 17 00:19:48.128: INFO: Service rm3 in namespace kubectl-5290 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:19:50.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5290" for this suite. 
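The two `kubectl expose` invocations above each create a Service with a different `port` but the same `targetPort` 6379 (`rm2` exposes the RC, `rm3` re-exposes `rm2`). A simplified sketch of the object `kubectl expose` builds, with the field set trimmed to the essentials; the `app: agnhost` selector comes from the RC label matched earlier in the log:

```python
def expose_service(name, port, target_port, selector):
    """Simplified shape of the Service created by
    `kubectl expose ... --name=NAME --port=PORT --target-port=TARGET`."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": selector,
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }

rm2 = expose_service("rm2", 1234, 6379, {"app": "agnhost"})
rm3 = expose_service("rm3", 2345, 6379, {"app": "agnhost"})
```

Because both Services carry the same selector, they route to the same backend pods; only the client-facing port differs.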
• [SLOW TEST:9.248 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":167,"skipped":3050,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:19:50.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 00:19:50.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ea817c0-ffed-4168-b122-7451bfbe22f0" in namespace "downward-api-5635" to be "Succeeded or Failed" Apr 17 00:19:50.230: INFO: Pod "downwardapi-volume-8ea817c0-ffed-4168-b122-7451bfbe22f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.011261ms Apr 17 00:19:52.252: INFO: Pod "downwardapi-volume-8ea817c0-ffed-4168-b122-7451bfbe22f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040293874s Apr 17 00:19:54.255: INFO: Pod "downwardapi-volume-8ea817c0-ffed-4168-b122-7451bfbe22f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043585759s STEP: Saw pod success Apr 17 00:19:54.255: INFO: Pod "downwardapi-volume-8ea817c0-ffed-4168-b122-7451bfbe22f0" satisfied condition "Succeeded or Failed" Apr 17 00:19:54.258: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8ea817c0-ffed-4168-b122-7451bfbe22f0 container client-container: STEP: delete the pod Apr 17 00:19:54.304: INFO: Waiting for pod downwardapi-volume-8ea817c0-ffed-4168-b122-7451bfbe22f0 to disappear Apr 17 00:19:54.317: INFO: Pod downwardapi-volume-8ea817c0-ffed-4168-b122-7451bfbe22f0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:19:54.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5635" for this suite. 
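This DefaultMode test, and the `DefaultMode:*420` fields in the pod dumps earlier, use the same value: the API stores volume file modes as decimal int32, and 420 decimal is octal 0644 (rw-r--r--). A one-line check:

```python
# 420 (decimal, as printed in the API object dumps) == 0644 (octal file mode)
assert 420 == 0o644
print(oct(420))  # 0o644
```

This is worth remembering when reading raw API dumps: a mode of 511 decimal, for instance, is 0777 octal, not a typo.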
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":3071,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:19:54.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 17 00:20:00.920: INFO: Successfully updated pod "adopt-release-rccfk" STEP: Checking that the Job readopts the Pod Apr 17 00:20:00.920: INFO: Waiting up to 15m0s for pod "adopt-release-rccfk" in namespace "job-9107" to be "adopted" Apr 17 00:20:00.943: INFO: Pod "adopt-release-rccfk": Phase="Running", Reason="", readiness=true. Elapsed: 22.948355ms Apr 17 00:20:02.947: INFO: Pod "adopt-release-rccfk": Phase="Running", Reason="", readiness=true. Elapsed: 2.027103733s Apr 17 00:20:02.947: INFO: Pod "adopt-release-rccfk" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 17 00:20:03.457: INFO: Successfully updated pod "adopt-release-rccfk" STEP: Checking that the Job releases the Pod Apr 17 00:20:03.457: INFO: Waiting up to 15m0s for pod "adopt-release-rccfk" in namespace "job-9107" to be "released" Apr 17 00:20:03.483: INFO: Pod "adopt-release-rccfk": Phase="Running", Reason="", readiness=true. 
Elapsed: 25.816489ms Apr 17 00:20:05.486: INFO: Pod "adopt-release-rccfk": Phase="Running", Reason="", readiness=true. Elapsed: 2.029407934s Apr 17 00:20:05.486: INFO: Pod "adopt-release-rccfk" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:20:05.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9107" for this suite. • [SLOW TEST:11.170 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":169,"skipped":3073,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:20:05.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 17 00:20:05.623: INFO: >>> 
kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:20:19.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1852" for this suite. • [SLOW TEST:14.341 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":170,"skipped":3106,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:20:19.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never 
restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:21:19.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-247" for this suite. • [SLOW TEST:60.097 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":3112,"failed":0} [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:21:19.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:21:20.015: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/ pods/
(200; 6.342388ms) Apr 17 00:21:20.019: INFO: (1) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.674173ms) Apr 17 00:21:20.023: INFO: (2) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.459768ms) Apr 17 00:21:20.026: INFO: (3) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.407552ms) Apr 17 00:21:20.029: INFO: (4) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.124697ms) Apr 17 00:21:20.033: INFO: (5) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.685553ms) Apr 17 00:21:20.036: INFO: (6) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.280733ms) Apr 17 00:21:20.040: INFO: (7) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.491632ms) Apr 17 00:21:20.043: INFO: (8) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.448147ms) Apr 17 00:21:20.047: INFO: (9) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.518316ms) Apr 17 00:21:20.050: INFO: (10) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.448159ms) Apr 17 00:21:20.054: INFO: (11) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.274775ms) Apr 17 00:21:20.057: INFO: (12) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.840292ms) Apr 17 00:21:20.060: INFO: (13) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 2.986131ms) Apr 17 00:21:20.064: INFO: (14) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.415967ms) Apr 17 00:21:20.067: INFO: (15) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.306242ms) Apr 17 00:21:20.071: INFO: (16) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.452054ms) Apr 17 00:21:20.074: INFO: (17) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.66964ms) Apr 17 00:21:20.078: INFO: (18) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.670426ms) Apr 17 00:21:20.082: INFO: (19) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.392723ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:21:20.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2632" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":172,"skipped":3112,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:21:20.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:21:20.168: INFO: Creating deployment "webserver-deployment" Apr 17 00:21:20.173: INFO: Waiting for observed generation 1 Apr 17 00:21:22.213: INFO: Waiting for all required pods to come up Apr 17 00:21:22.218: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 17 00:21:30.579: INFO: Waiting for deployment "webserver-deployment" to complete Apr 17 00:21:30.585: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 17 00:21:30.591: INFO: Updating deployment webserver-deployment Apr 17 00:21:30.591: INFO: Waiting for observed generation 2 Apr 17 00:21:32.601: INFO: Waiting for the first 
rollout's replicaset to have .status.availableReplicas = 8 Apr 17 00:21:32.604: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 17 00:21:32.606: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 17 00:21:32.614: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 17 00:21:32.614: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 17 00:21:32.615: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 17 00:21:32.619: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 17 00:21:32.619: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 17 00:21:32.624: INFO: Updating deployment webserver-deployment Apr 17 00:21:32.624: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 17 00:21:32.676: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 17 00:21:32.729: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 17 00:21:32.894: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8471 /apis/apps/v1/namespaces/deployment-8471/deployments/webserver-deployment 656a129b-11a1-496b-88b6-a705a08b28bc 8671847 3 2020-04-17 00:21:20 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} 
[] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00406dbf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-17 00:21:31 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-17 00:21:32 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 17 00:21:33.001: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-8471 /apis/apps/v1/namespaces/deployment-8471/replicasets/webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 8671908 3 2020-04-17 00:21:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment 
webserver-deployment 656a129b-11a1-496b-88b6-a705a08b28bc 0xc002dd2147 0xc002dd2148}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002dd21b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 17 00:21:33.001: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 17 00:21:33.001: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-8471 /apis/apps/v1/namespaces/deployment-8471/replicasets/webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 8671897 3 2020-04-17 00:21:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 656a129b-11a1-496b-88b6-a705a08b28bc 0xc002dd2087 0xc002dd2088}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 
595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002dd20e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 17 00:21:33.083: INFO: Pod "webserver-deployment-595b5b9587-2tx9k" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2tx9k webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-2tx9k ba3210e9-9a86-4fb1-8e5c-5caccfa04f4e 8671881 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f8f87 0xc0032f8f88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-17 00:21:32 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.083: INFO: Pod "webserver-deployment-595b5b9587-4ndp5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4ndp5 webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-4ndp5 c3bfb232-3aa9-44db-96dc-fa7a6cd70d0b 8671869 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f90e7 0xc0032f90e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.083: INFO: Pod "webserver-deployment-595b5b9587-55tl7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-55tl7 webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-55tl7 b8d638fe-e9c9-4191-b33c-911012b7b83d 8671878 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f9207 0xc0032f9208}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.083: INFO: Pod "webserver-deployment-595b5b9587-6s57v" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6s57v webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-6s57v 69c182dd-88e5-4595-bc0b-6b8cfcdad472 8671722 0 2020-04-17 00:21:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f9327 0xc0032f9328}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.233,StartTime:2020-04-17 00:21:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:21:26 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aab07c8694fb0a3ab25e110409688c3c6a68617134f85b839233890fc1742796,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.084: INFO: Pod "webserver-deployment-595b5b9587-8fkbg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8fkbg webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-8fkbg badaf1a0-7cd9-42f9-88d6-8870faa9d7ac 8671735 0 2020-04-17 00:21:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f94a7 0xc0032f94a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.234,StartTime:2020-04-17 00:21:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:21:27 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://93b53c31dcab9aa2b1d38855cfa4ebe2d75321443fe06fb1e1b858e4fb03f3cf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.234,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.084: INFO: Pod "webserver-deployment-595b5b9587-c4fkr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c4fkr webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-c4fkr 65858184-0a77-40d0-9fcc-c5a3a27ab037 8671864 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f9627 0xc0032f9628}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.084: INFO: Pod "webserver-deployment-595b5b9587-fvx62" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fvx62 webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-fvx62 b77496c9-9f26-4b05-b79b-3fa0f5f32905 8671697 0 2020-04-17 00:21:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f9747 0xc0032f9748}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.10,StartTime:2020-04-17 00:21:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:21:23 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://12ff8a47aef785f4643beff9aafd795fc719da439bf8bf5c44244195288192c5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.084: INFO: Pod "webserver-deployment-595b5b9587-gndkp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gndkp webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-gndkp b492e9c1-ca0a-4935-ae3d-c71c0f8d0dbb 8671852 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f98c7 0xc0032f98c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.084: INFO: Pod "webserver-deployment-595b5b9587-hfg42" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hfg42 webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-hfg42 6d82341d-e06c-48bd-8f15-9267721d8658 8671861 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f99e7 0xc0032f99e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.085: INFO: Pod "webserver-deployment-595b5b9587-hsrwc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hsrwc webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-hsrwc 67d9d161-1c55-416f-975c-a6719ef20a2d 8671726 0 2020-04-17 00:21:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f9b07 0xc0032f9b08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.12,StartTime:2020-04-17 00:21:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:21:26 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://de813da43a4a40ab0649815e66c393aad74e45e14a9f452280fbf08a9ea1567e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.085: INFO: Pod "webserver-deployment-595b5b9587-jg48s" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jg48s webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-jg48s fd7c1778-5195-4ffb-9102-490e451284f6 8671752 0 2020-04-17 00:21:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f9c87 0xc0032f9c88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.235,StartTime:2020-04-17 00:21:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:21:28 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://46ccf4ff1348b170c6dccebd73ea5acedc613e5b71d19ab7ef6e933999126305,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.235,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.085: INFO: Pod "webserver-deployment-595b5b9587-kprhj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kprhj webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-kprhj 084376b8-ef58-45b9-9aa6-ce4732a211d3 8671892 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f9e07 0xc0032f9e08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.085: INFO: Pod "webserver-deployment-595b5b9587-lrq4d" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lrq4d webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-lrq4d 20993042-c421-4141-b100-d43491442b6d 8671751 0 2020-04-17 00:21:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc0032f9f37 0xc0032f9f38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.14,StartTime:2020-04-17 00:21:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:21:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://090fe27d5ab4a0ff057b0d68f69a8d668002e3f1c0dff4bf55f9b0676c07d835,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.086: INFO: Pod "webserver-deployment-595b5b9587-mfvrb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mfvrb webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-mfvrb 55ecd055-5175-4b0b-b1eb-d11149cba51d 8671886 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc002f6c0b7 0xc002f6c0b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.086: INFO: Pod "webserver-deployment-595b5b9587-pgv9r" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pgv9r webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-pgv9r dc7337b3-1c7f-45f6-8479-0cf0803f2126 8671887 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc002f6c1d7 0xc002f6c1d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.086: INFO: Pod "webserver-deployment-595b5b9587-q7l6v" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q7l6v webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-q7l6v e8b5687d-7260-465d-acf1-aa3fdb284b55 8671721 0 2020-04-17 00:21:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc002f6c2f7 0xc002f6c2f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.11,StartTime:2020-04-17 00:21:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:21:26 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://839b0fd1e6d0b7e93dc3ef69667388394c7ae7f6a5ba12572996b4a95b88e5b3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.086: INFO: Pod "webserver-deployment-595b5b9587-v9fdb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v9fdb webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-v9fdb f935907f-5806-47cd-9b1a-dbb77693e0fc 8671916 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc002f6c477 0xc002f6c478}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-17 00:21:32 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.087: INFO: Pod "webserver-deployment-595b5b9587-vl7sg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vl7sg webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-vl7sg 0e5e979c-5e5d-42c5-ae4c-5e272202aba7 8671912 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc002f6c5d7 0xc002f6c5d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-17 00:21:32 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.087: INFO: Pod "webserver-deployment-595b5b9587-w9bvf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w9bvf webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-w9bvf 06bb3a87-8374-4c75-bae3-c61bd35d5b33 8671888 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc002f6c737 0xc002f6c738}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.087: INFO: Pod "webserver-deployment-595b5b9587-wwl6z" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wwl6z webserver-deployment-595b5b9587- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-595b5b9587-wwl6z 2c8ba88e-d74b-41ba-a77e-13faeadf7690 8671757 0 2020-04-17 00:21:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2cf9ecce-7412-4703-9c5a-bbc8ecbffff7 0xc002f6c857 0xc002f6c858}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.13,StartTime:2020-04-17 00:21:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:21:28 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e5b3b94400ae40fe10411ac5c0e343f23cda60cfd1b3bc239fc9a14b21bb4257,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.087: INFO: Pod "webserver-deployment-c7997dcc8-4xxfl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4xxfl webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-4xxfl 8c47ed15-557e-4cb2-b549-54f72fe0c00c 8671820 0 2020-04-17 00:21:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6c9d7 0xc002f6c9d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-17 00:21:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 17 00:21:33.088: INFO: Pod "webserver-deployment-c7997dcc8-6lg2n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6lg2n webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-6lg2n f06651aa-49e3-4e7b-8575-05da24ff9701 8671911 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6cb57 0xc002f6cb58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-17 00:21:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 17 00:21:33.088: INFO: Pod "webserver-deployment-c7997dcc8-82lsp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-82lsp webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-82lsp a6c4db1f-99a6-440c-a28d-b77d1dde5ad4 8671794 0 2020-04-17 00:21:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6cce7 0xc002f6cce8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-17 00:21:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 17 00:21:33.088: INFO: Pod "webserver-deployment-c7997dcc8-dlhxr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dlhxr webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-dlhxr 7e690c19-ce85-4699-8511-8db1438bee9e 8671815 0 2020-04-17 00:21:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6ce67 0xc002f6ce68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-17 00:21:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 17 00:21:33.088: INFO: Pod "webserver-deployment-c7997dcc8-fl98p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fl98p webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-fl98p e22babb3-f52c-4110-929c-24b1bac05e3d 8671866 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6cfe7 0xc002f6cfe8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 17 00:21:33.088: INFO: Pod "webserver-deployment-c7997dcc8-h64xf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-h64xf webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-h64xf 4a1456c3-7aee-4cc2-8722-9994a929200a 8671891 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6d117 0xc002f6d118}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 17 00:21:33.089: INFO: Pod "webserver-deployment-c7997dcc8-j9n9n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j9n9n webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-j9n9n 585fdee9-0ec3-4298-a167-7274c0430daf 8671795 0 2020-04-17 00:21:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6d257 0xc002f6d258}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-17 00:21:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 17 00:21:33.089: INFO: Pod "webserver-deployment-c7997dcc8-jw84j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jw84j webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-jw84j 28001385-9e5d-4305-a403-978ae953e402 8671823 0 2020-04-17 00:21:30 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6d3d7 0xc002f6d3d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-17 00:21:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 17 00:21:33.089: INFO: Pod "webserver-deployment-c7997dcc8-kvrww" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kvrww webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-kvrww 25493710-a221-4174-a833-56ba9c9b9842 8671906 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6d557 0xc002f6d558}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.089: INFO: Pod "webserver-deployment-c7997dcc8-lvszm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lvszm webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-lvszm c33b5c89-e18e-4e76-a375-74bfa89f5e3e 8671868 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6d6a7 0xc002f6d6a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.089: INFO: Pod "webserver-deployment-c7997dcc8-r94pn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r94pn webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-r94pn 48d7f3b3-847b-4a08-aeb7-390e50acd28d 8671890 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6d7f7 0xc002f6d7f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.090: INFO: Pod "webserver-deployment-c7997dcc8-wfn4c" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wfn4c webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-wfn4c 7ce2524a-6d78-4029-8408-ba175d8f7c3c 8671876 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6da67 0xc002f6da68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 00:21:33.090: INFO: Pod "webserver-deployment-c7997dcc8-xddtv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xddtv webserver-deployment-c7997dcc8- deployment-8471 /api/v1/namespaces/deployment-8471/pods/webserver-deployment-c7997dcc8-xddtv 9525bde0-bee2-4700-ae11-b136c65d5f0c 8671889 0 2020-04-17 00:21:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ca6554bc-3983-4978-91f6-4f99d02672ab 0xc002f6dbe7 0xc002f6dbe8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl4gt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl4gt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:21:33.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8471" for this suite. 
• [SLOW TEST:13.213 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":173,"skipped":3114,"failed":0} [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:21:33.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 17 00:21:33.763: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:21:57.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6555" for this suite. 
• [SLOW TEST:23.736 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":174,"skipped":3114,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:21:57.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 17 00:21:57.095: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-a ffaebec7-cd65-4e98-9e6f-f9c8e2aec0ef 8672218 0 2020-04-17 00:21:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 00:21:57.095: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-a ffaebec7-cd65-4e98-9e6f-f9c8e2aec0ef 8672218 0 2020-04-17 00:21:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 17 00:22:07.109: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-a ffaebec7-cd65-4e98-9e6f-f9c8e2aec0ef 8672264 0 2020-04-17 00:21:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 00:22:07.109: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-a ffaebec7-cd65-4e98-9e6f-f9c8e2aec0ef 8672264 0 2020-04-17 00:21:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 17 00:22:17.116: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-a ffaebec7-cd65-4e98-9e6f-f9c8e2aec0ef 8672294 0 2020-04-17 00:21:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 00:22:17.116: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-a 
ffaebec7-cd65-4e98-9e6f-f9c8e2aec0ef 8672294 0 2020-04-17 00:21:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 17 00:22:27.124: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-a ffaebec7-cd65-4e98-9e6f-f9c8e2aec0ef 8672326 0 2020-04-17 00:21:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 00:22:27.124: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-a ffaebec7-cd65-4e98-9e6f-f9c8e2aec0ef 8672326 0 2020-04-17 00:21:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 17 00:22:37.132: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-b cc8c24b7-6be6-44e3-ab80-98947345ba1c 8672356 0 2020-04-17 00:22:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 00:22:37.132: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-b cc8c24b7-6be6-44e3-ab80-98947345ba1c 8672356 0 2020-04-17 00:22:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the 
correct watchers observe the notification Apr 17 00:22:47.139: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-b cc8c24b7-6be6-44e3-ab80-98947345ba1c 8672386 0 2020-04-17 00:22:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 00:22:47.139: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8951 /api/v1/namespaces/watch-8951/configmaps/e2e-watch-test-configmap-b cc8c24b7-6be6-44e3-ab80-98947345ba1c 8672386 0 2020-04-17 00:22:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:22:57.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8951" for this suite. 
• [SLOW TEST:60.110 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":175,"skipped":3165,"failed":0} [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:22:57.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:22:57.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 17 00:22:57.809: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T00:22:57Z generation:1 name:name1 resourceVersion:8672429 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:afb54a02-f640-419c-b90d-bf916f95a299] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 17 00:23:07.814: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] 
kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T00:23:07Z generation:1 name:name2 resourceVersion:8672464 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4f224c6a-9961-4a5d-89f3-a93542a70e8a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 17 00:23:17.819: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T00:22:57Z generation:2 name:name1 resourceVersion:8672494 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:afb54a02-f640-419c-b90d-bf916f95a299] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 17 00:23:27.825: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T00:23:07Z generation:2 name:name2 resourceVersion:8672524 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4f224c6a-9961-4a5d-89f3-a93542a70e8a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 17 00:23:37.832: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T00:22:57Z generation:2 name:name1 resourceVersion:8672554 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:afb54a02-f640-419c-b90d-bf916f95a299] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 17 00:23:47.841: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T00:23:07Z generation:2 name:name2 resourceVersion:8672585 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4f224c6a-9961-4a5d-89f3-a93542a70e8a] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:23:58.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5008" for this suite. • [SLOW TEST:61.211 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":176,"skipped":3165,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:23:58.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:24:14.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3375" for this suite. • [SLOW TEST:16.357 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":275,"completed":177,"skipped":3171,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:24:14.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5963.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5963.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5963.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5963.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5963.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5963.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 17 00:24:20.878: INFO: DNS probes using dns-5963/dns-test-8c79c738-ed0a-4482-9f16-5b12524e40a0 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:24:21.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5963" for this suite. • [SLOW TEST:6.292 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":178,"skipped":3178,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:24:21.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be 
provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:24:25.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7986" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":179,"skipped":3194,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:24:26.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:24:26.072: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:24:27.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "custom-resource-definition-783" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":180,"skipped":3219,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:24:27.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:24:31.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5620" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3221,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:24:31.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 17 00:24:34.367: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:24:34.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1040" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3248,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:24:34.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 17 00:24:39.167: INFO: Successfully updated pod "pod-update-d6c8412a-c69f-4404-97cc-038bfb88845f" STEP: verifying the updated pod is in kubernetes Apr 17 00:24:39.193: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:24:39.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8773" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3268,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:24:39.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-75f3d6a8-81e8-401e-8e0e-5498b8c9267e STEP: Creating a pod to test consume configMaps Apr 17 00:24:39.283: INFO: Waiting up to 5m0s for pod "pod-configmaps-0676b78e-946f-489c-aeb3-e4ebd16567fd" in namespace "configmap-5493" to be "Succeeded or Failed" Apr 17 00:24:39.287: INFO: Pod "pod-configmaps-0676b78e-946f-489c-aeb3-e4ebd16567fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353531ms Apr 17 00:24:41.291: INFO: Pod "pod-configmaps-0676b78e-946f-489c-aeb3-e4ebd16567fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008332528s Apr 17 00:24:43.295: INFO: Pod "pod-configmaps-0676b78e-946f-489c-aeb3-e4ebd16567fd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012643048s STEP: Saw pod success Apr 17 00:24:43.295: INFO: Pod "pod-configmaps-0676b78e-946f-489c-aeb3-e4ebd16567fd" satisfied condition "Succeeded or Failed" Apr 17 00:24:43.299: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-0676b78e-946f-489c-aeb3-e4ebd16567fd container configmap-volume-test: STEP: delete the pod Apr 17 00:24:43.317: INFO: Waiting for pod pod-configmaps-0676b78e-946f-489c-aeb3-e4ebd16567fd to disappear Apr 17 00:24:43.322: INFO: Pod pod-configmaps-0676b78e-946f-489c-aeb3-e4ebd16567fd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:24:43.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5493" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3288,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:24:43.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-7606d00e-472a-46ba-9ac3-fd83746cb3fc [AfterEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:24:43.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4185" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":185,"skipped":3296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:24:43.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-f9b833fe-a4b4-4e0d-934b-82b41ea2f46f STEP: Creating a pod to test consume secrets Apr 17 00:24:43.533: INFO: Waiting up to 5m0s for pod "pod-secrets-01be8661-661c-4039-9a1e-8ea60dafab05" in namespace "secrets-7522" to be "Succeeded or Failed" Apr 17 00:24:43.544: INFO: Pod "pod-secrets-01be8661-661c-4039-9a1e-8ea60dafab05": Phase="Pending", Reason="", readiness=false. Elapsed: 11.800866ms Apr 17 00:24:45.549: INFO: Pod "pod-secrets-01be8661-661c-4039-9a1e-8ea60dafab05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016860693s Apr 17 00:24:47.553: INFO: Pod "pod-secrets-01be8661-661c-4039-9a1e-8ea60dafab05": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020926142s STEP: Saw pod success Apr 17 00:24:47.554: INFO: Pod "pod-secrets-01be8661-661c-4039-9a1e-8ea60dafab05" satisfied condition "Succeeded or Failed" Apr 17 00:24:47.556: INFO: Trying to get logs from node latest-worker pod pod-secrets-01be8661-661c-4039-9a1e-8ea60dafab05 container secret-volume-test: STEP: delete the pod Apr 17 00:24:47.756: INFO: Waiting for pod pod-secrets-01be8661-661c-4039-9a1e-8ea60dafab05 to disappear Apr 17 00:24:47.777: INFO: Pod pod-secrets-01be8661-661c-4039-9a1e-8ea60dafab05 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:24:47.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7522" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:24:47.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 17 00:24:47.822: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 17 00:24:47.858: INFO: Waiting for terminating 
namespaces to be deleted... Apr 17 00:24:47.860: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 17 00:24:47.865: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:24:47.865: INFO: Container kindnet-cni ready: true, restart count 0 Apr 17 00:24:47.865: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:24:47.866: INFO: Container kube-proxy ready: true, restart count 0 Apr 17 00:24:47.866: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 17 00:24:47.870: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:24:47.871: INFO: Container kindnet-cni ready: true, restart count 0 Apr 17 00:24:47.871: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 00:24:47.871: INFO: Container kube-proxy ready: true, restart count 0 Apr 17 00:24:47.871: INFO: busybox-scheduling-7c01b859-8abb-4acd-8cf8-189a266aa96b from kubelet-test-5620 started at 2020-04-17 00:24:27 +0000 UTC (1 container statuses recorded) Apr 17 00:24:47.871: INFO: Container busybox-scheduling-7c01b859-8abb-4acd-8cf8-189a266aa96b ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-55e37f33-e519-44c8-a863-48fdf6271e84 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-55e37f33-e519-44c8-a863-48fdf6271e84 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-55e37f33-e519-44c8-a863-48fdf6271e84 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:29:56.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1074" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.305 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":187,"skipped":3355,"failed":0} S ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] 
[sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:29:56.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-6e2dec06-6ea0-4034-853f-3bc596caec1a STEP: Creating a pod to test consume secrets Apr 17 00:29:56.252: INFO: Waiting up to 5m0s for pod "pod-secrets-2ff9f850-dd66-452e-82dd-98b05c805b85" in namespace "secrets-5057" to be "Succeeded or Failed" Apr 17 00:29:56.256: INFO: Pod "pod-secrets-2ff9f850-dd66-452e-82dd-98b05c805b85": Phase="Pending", Reason="", readiness=false. Elapsed: 3.983464ms Apr 17 00:29:58.299: INFO: Pod "pod-secrets-2ff9f850-dd66-452e-82dd-98b05c805b85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046961581s Apr 17 00:30:00.316: INFO: Pod "pod-secrets-2ff9f850-dd66-452e-82dd-98b05c805b85": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.064448451s STEP: Saw pod success Apr 17 00:30:00.316: INFO: Pod "pod-secrets-2ff9f850-dd66-452e-82dd-98b05c805b85" satisfied condition "Succeeded or Failed" Apr 17 00:30:00.319: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2ff9f850-dd66-452e-82dd-98b05c805b85 container secret-volume-test: STEP: delete the pod Apr 17 00:30:00.353: INFO: Waiting for pod pod-secrets-2ff9f850-dd66-452e-82dd-98b05c805b85 to disappear Apr 17 00:30:00.358: INFO: Pod pod-secrets-2ff9f850-dd66-452e-82dd-98b05c805b85 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:30:00.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5057" for this suite. STEP: Destroying namespace "secret-namespace-8385" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:30:00.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
projected-configmap-test-volume-42d1a5b1-99e7-4722-9e58-f9086451e908 STEP: Creating a pod to test consume configMaps Apr 17 00:30:00.486: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2c8cdc85-3bc9-4909-939a-ea94d46ac93d" in namespace "projected-7687" to be "Succeeded or Failed" Apr 17 00:30:00.496: INFO: Pod "pod-projected-configmaps-2c8cdc85-3bc9-4909-939a-ea94d46ac93d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.689709ms Apr 17 00:30:02.500: INFO: Pod "pod-projected-configmaps-2c8cdc85-3bc9-4909-939a-ea94d46ac93d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013682912s Apr 17 00:30:04.505: INFO: Pod "pod-projected-configmaps-2c8cdc85-3bc9-4909-939a-ea94d46ac93d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018625638s STEP: Saw pod success Apr 17 00:30:04.505: INFO: Pod "pod-projected-configmaps-2c8cdc85-3bc9-4909-939a-ea94d46ac93d" satisfied condition "Succeeded or Failed" Apr 17 00:30:04.508: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-2c8cdc85-3bc9-4909-939a-ea94d46ac93d container projected-configmap-volume-test: STEP: delete the pod Apr 17 00:30:04.575: INFO: Waiting for pod pod-projected-configmaps-2c8cdc85-3bc9-4909-939a-ea94d46ac93d to disappear Apr 17 00:30:04.597: INFO: Pod pod-projected-configmaps-2c8cdc85-3bc9-4909-939a-ea94d46ac93d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:30:04.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7687" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3380,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:30:04.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:30:04.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5733" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":190,"skipped":3409,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:30:04.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-8993 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 17 00:30:04.807: INFO: Found 0 stateful pods, waiting for 3 Apr 17 00:30:14.810: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 17 00:30:14.810: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 17 00:30:14.810: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 17 00:30:14.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config 
exec --namespace=statefulset-8993 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 17 00:30:17.518: INFO: stderr: "I0417 00:30:17.383931 2095 log.go:172] (0xc000cf0c60) (0xc00064d720) Create stream\nI0417 00:30:17.383978 2095 log.go:172] (0xc000cf0c60) (0xc00064d720) Stream added, broadcasting: 1\nI0417 00:30:17.387331 2095 log.go:172] (0xc000cf0c60) Reply frame received for 1\nI0417 00:30:17.387371 2095 log.go:172] (0xc000cf0c60) (0xc0005b15e0) Create stream\nI0417 00:30:17.387385 2095 log.go:172] (0xc000cf0c60) (0xc0005b15e0) Stream added, broadcasting: 3\nI0417 00:30:17.388396 2095 log.go:172] (0xc000cf0c60) Reply frame received for 3\nI0417 00:30:17.388431 2095 log.go:172] (0xc000cf0c60) (0xc00030ea00) Create stream\nI0417 00:30:17.388441 2095 log.go:172] (0xc000cf0c60) (0xc00030ea00) Stream added, broadcasting: 5\nI0417 00:30:17.389715 2095 log.go:172] (0xc000cf0c60) Reply frame received for 5\nI0417 00:30:17.470076 2095 log.go:172] (0xc000cf0c60) Data frame received for 5\nI0417 00:30:17.470110 2095 log.go:172] (0xc00030ea00) (5) Data frame handling\nI0417 00:30:17.470135 2095 log.go:172] (0xc00030ea00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0417 00:30:17.512537 2095 log.go:172] (0xc000cf0c60) Data frame received for 3\nI0417 00:30:17.512566 2095 log.go:172] (0xc0005b15e0) (3) Data frame handling\nI0417 00:30:17.512578 2095 log.go:172] (0xc0005b15e0) (3) Data frame sent\nI0417 00:30:17.512584 2095 log.go:172] (0xc000cf0c60) Data frame received for 3\nI0417 00:30:17.512589 2095 log.go:172] (0xc0005b15e0) (3) Data frame handling\nI0417 00:30:17.512614 2095 log.go:172] (0xc000cf0c60) Data frame received for 5\nI0417 00:30:17.512620 2095 log.go:172] (0xc00030ea00) (5) Data frame handling\nI0417 00:30:17.514191 2095 log.go:172] (0xc000cf0c60) Data frame received for 1\nI0417 00:30:17.514203 2095 log.go:172] (0xc00064d720) (1) Data frame handling\nI0417 00:30:17.514216 2095 log.go:172] 
(0xc00064d720) (1) Data frame sent\nI0417 00:30:17.514311 2095 log.go:172] (0xc000cf0c60) (0xc00064d720) Stream removed, broadcasting: 1\nI0417 00:30:17.514356 2095 log.go:172] (0xc000cf0c60) Go away received\nI0417 00:30:17.514632 2095 log.go:172] (0xc000cf0c60) (0xc00064d720) Stream removed, broadcasting: 1\nI0417 00:30:17.514651 2095 log.go:172] (0xc000cf0c60) (0xc0005b15e0) Stream removed, broadcasting: 3\nI0417 00:30:17.514662 2095 log.go:172] (0xc000cf0c60) (0xc00030ea00) Stream removed, broadcasting: 5\n" Apr 17 00:30:17.518: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 17 00:30:17.518: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 17 00:30:27.574: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 17 00:30:37.632: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8993 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 17 00:30:37.820: INFO: stderr: "I0417 00:30:37.746376 2128 log.go:172] (0xc0007e0a50) (0xc0007da140) Create stream\nI0417 00:30:37.746431 2128 log.go:172] (0xc0007e0a50) (0xc0007da140) Stream added, broadcasting: 1\nI0417 00:30:37.748338 2128 log.go:172] (0xc0007e0a50) Reply frame received for 1\nI0417 00:30:37.748364 2128 log.go:172] (0xc0007e0a50) (0xc0007da1e0) Create stream\nI0417 00:30:37.748371 2128 log.go:172] (0xc0007e0a50) (0xc0007da1e0) Stream added, broadcasting: 3\nI0417 00:30:37.748993 2128 log.go:172] (0xc0007e0a50) Reply frame received for 3\nI0417 00:30:37.749014 2128 log.go:172] (0xc0007e0a50) (0xc000627360) Create stream\nI0417 00:30:37.749022 2128 log.go:172] (0xc0007e0a50) 
(0xc000627360) Stream added, broadcasting: 5\nI0417 00:30:37.749774 2128 log.go:172] (0xc0007e0a50) Reply frame received for 5\nI0417 00:30:37.814191 2128 log.go:172] (0xc0007e0a50) Data frame received for 3\nI0417 00:30:37.814239 2128 log.go:172] (0xc0007da1e0) (3) Data frame handling\nI0417 00:30:37.814253 2128 log.go:172] (0xc0007da1e0) (3) Data frame sent\nI0417 00:30:37.814263 2128 log.go:172] (0xc0007e0a50) Data frame received for 3\nI0417 00:30:37.814273 2128 log.go:172] (0xc0007da1e0) (3) Data frame handling\nI0417 00:30:37.814317 2128 log.go:172] (0xc0007e0a50) Data frame received for 5\nI0417 00:30:37.814331 2128 log.go:172] (0xc000627360) (5) Data frame handling\nI0417 00:30:37.814354 2128 log.go:172] (0xc000627360) (5) Data frame sent\nI0417 00:30:37.814363 2128 log.go:172] (0xc0007e0a50) Data frame received for 5\nI0417 00:30:37.814367 2128 log.go:172] (0xc000627360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0417 00:30:37.815707 2128 log.go:172] (0xc0007e0a50) Data frame received for 1\nI0417 00:30:37.815760 2128 log.go:172] (0xc0007da140) (1) Data frame handling\nI0417 00:30:37.815790 2128 log.go:172] (0xc0007da140) (1) Data frame sent\nI0417 00:30:37.815853 2128 log.go:172] (0xc0007e0a50) (0xc0007da140) Stream removed, broadcasting: 1\nI0417 00:30:37.815887 2128 log.go:172] (0xc0007e0a50) Go away received\nI0417 00:30:37.816233 2128 log.go:172] (0xc0007e0a50) (0xc0007da140) Stream removed, broadcasting: 1\nI0417 00:30:37.816253 2128 log.go:172] (0xc0007e0a50) (0xc0007da1e0) Stream removed, broadcasting: 3\nI0417 00:30:37.816264 2128 log.go:172] (0xc0007e0a50) (0xc000627360) Stream removed, broadcasting: 5\n" Apr 17 00:30:37.820: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 17 00:30:37.820: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 17 00:30:57.871: INFO: Waiting for 
StatefulSet statefulset-8993/ss2 to complete update Apr 17 00:30:57.871: INFO: Waiting for Pod statefulset-8993/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 17 00:31:07.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8993 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 17 00:31:08.114: INFO: stderr: "I0417 00:31:08.014005 2148 log.go:172] (0xc0009b2630) (0xc0008e80a0) Create stream\nI0417 00:31:08.014064 2148 log.go:172] (0xc0009b2630) (0xc0008e80a0) Stream added, broadcasting: 1\nI0417 00:31:08.016660 2148 log.go:172] (0xc0009b2630) Reply frame received for 1\nI0417 00:31:08.016706 2148 log.go:172] (0xc0009b2630) (0xc0006cb2c0) Create stream\nI0417 00:31:08.016721 2148 log.go:172] (0xc0009b2630) (0xc0006cb2c0) Stream added, broadcasting: 3\nI0417 00:31:08.017810 2148 log.go:172] (0xc0009b2630) Reply frame received for 3\nI0417 00:31:08.017837 2148 log.go:172] (0xc0009b2630) (0xc0008e8140) Create stream\nI0417 00:31:08.017849 2148 log.go:172] (0xc0009b2630) (0xc0008e8140) Stream added, broadcasting: 5\nI0417 00:31:08.018703 2148 log.go:172] (0xc0009b2630) Reply frame received for 5\nI0417 00:31:08.077042 2148 log.go:172] (0xc0009b2630) Data frame received for 5\nI0417 00:31:08.077069 2148 log.go:172] (0xc0008e8140) (5) Data frame handling\nI0417 00:31:08.077090 2148 log.go:172] (0xc0008e8140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0417 00:31:08.107708 2148 log.go:172] (0xc0009b2630) Data frame received for 3\nI0417 00:31:08.107729 2148 log.go:172] (0xc0006cb2c0) (3) Data frame handling\nI0417 00:31:08.107742 2148 log.go:172] (0xc0006cb2c0) (3) Data frame sent\nI0417 00:31:08.107748 2148 log.go:172] (0xc0009b2630) Data frame received for 3\nI0417 00:31:08.107754 2148 log.go:172] (0xc0006cb2c0) (3) Data frame handling\nI0417 
00:31:08.107796 2148 log.go:172] (0xc0009b2630) Data frame received for 5\nI0417 00:31:08.107810 2148 log.go:172] (0xc0008e8140) (5) Data frame handling\nI0417 00:31:08.109642 2148 log.go:172] (0xc0009b2630) Data frame received for 1\nI0417 00:31:08.109651 2148 log.go:172] (0xc0008e80a0) (1) Data frame handling\nI0417 00:31:08.109657 2148 log.go:172] (0xc0008e80a0) (1) Data frame sent\nI0417 00:31:08.109806 2148 log.go:172] (0xc0009b2630) (0xc0008e80a0) Stream removed, broadcasting: 1\nI0417 00:31:08.109970 2148 log.go:172] (0xc0009b2630) Go away received\nI0417 00:31:08.110138 2148 log.go:172] (0xc0009b2630) (0xc0008e80a0) Stream removed, broadcasting: 1\nI0417 00:31:08.110152 2148 log.go:172] (0xc0009b2630) (0xc0006cb2c0) Stream removed, broadcasting: 3\nI0417 00:31:08.110159 2148 log.go:172] (0xc0009b2630) (0xc0008e8140) Stream removed, broadcasting: 5\n" Apr 17 00:31:08.114: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 17 00:31:08.114: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 17 00:31:18.145: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 17 00:31:28.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8993 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 17 00:31:28.437: INFO: stderr: "I0417 00:31:28.348762 2169 log.go:172] (0xc000a740b0) (0xc0006cb400) Create stream\nI0417 00:31:28.348815 2169 log.go:172] (0xc000a740b0) (0xc0006cb400) Stream added, broadcasting: 1\nI0417 00:31:28.351159 2169 log.go:172] (0xc000a740b0) Reply frame received for 1\nI0417 00:31:28.351212 2169 log.go:172] (0xc000a740b0) (0xc0006cb5e0) Create stream\nI0417 00:31:28.351232 2169 log.go:172] (0xc000a740b0) (0xc0006cb5e0) Stream added, broadcasting: 3\nI0417 00:31:28.352145 2169 
log.go:172] (0xc000a740b0) Reply frame received for 3\nI0417 00:31:28.352181 2169 log.go:172] (0xc000a740b0) (0xc0006cb680) Create stream\nI0417 00:31:28.352190 2169 log.go:172] (0xc000a740b0) (0xc0006cb680) Stream added, broadcasting: 5\nI0417 00:31:28.353085 2169 log.go:172] (0xc000a740b0) Reply frame received for 5\nI0417 00:31:28.430239 2169 log.go:172] (0xc000a740b0) Data frame received for 3\nI0417 00:31:28.430274 2169 log.go:172] (0xc000a740b0) Data frame received for 5\nI0417 00:31:28.430324 2169 log.go:172] (0xc0006cb680) (5) Data frame handling\nI0417 00:31:28.430344 2169 log.go:172] (0xc0006cb680) (5) Data frame sent\nI0417 00:31:28.430356 2169 log.go:172] (0xc000a740b0) Data frame received for 5\nI0417 00:31:28.430369 2169 log.go:172] (0xc0006cb680) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0417 00:31:28.430392 2169 log.go:172] (0xc0006cb5e0) (3) Data frame handling\nI0417 00:31:28.430418 2169 log.go:172] (0xc0006cb5e0) (3) Data frame sent\nI0417 00:31:28.430425 2169 log.go:172] (0xc000a740b0) Data frame received for 3\nI0417 00:31:28.430430 2169 log.go:172] (0xc0006cb5e0) (3) Data frame handling\nI0417 00:31:28.432304 2169 log.go:172] (0xc000a740b0) Data frame received for 1\nI0417 00:31:28.432342 2169 log.go:172] (0xc0006cb400) (1) Data frame handling\nI0417 00:31:28.432366 2169 log.go:172] (0xc0006cb400) (1) Data frame sent\nI0417 00:31:28.432385 2169 log.go:172] (0xc000a740b0) (0xc0006cb400) Stream removed, broadcasting: 1\nI0417 00:31:28.432408 2169 log.go:172] (0xc000a740b0) Go away received\nI0417 00:31:28.432817 2169 log.go:172] (0xc000a740b0) (0xc0006cb400) Stream removed, broadcasting: 1\nI0417 00:31:28.432842 2169 log.go:172] (0xc000a740b0) (0xc0006cb5e0) Stream removed, broadcasting: 3\nI0417 00:31:28.432856 2169 log.go:172] (0xc000a740b0) (0xc0006cb680) Stream removed, broadcasting: 5\n" Apr 17 00:31:28.437: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 17 00:31:28.437: 
INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 17 00:31:38.458: INFO: Waiting for StatefulSet statefulset-8993/ss2 to complete update
Apr 17 00:31:38.458: INFO: Waiting for Pod statefulset-8993/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 17 00:31:38.458: INFO: Waiting for Pod statefulset-8993/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 17 00:31:38.458: INFO: Waiting for Pod statefulset-8993/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 17 00:31:48.474: INFO: Waiting for StatefulSet statefulset-8993/ss2 to complete update
Apr 17 00:31:48.474: INFO: Waiting for Pod statefulset-8993/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 17 00:31:48.474: INFO: Waiting for Pod statefulset-8993/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 17 00:31:58.466: INFO: Deleting all statefulset in ns statefulset-8993
Apr 17 00:31:58.469: INFO: Scaling statefulset ss2 to 0
Apr 17 00:32:18.497: INFO: Waiting for statefulset status.replicas updated to 0
Apr 17 00:32:18.500: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:32:18.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8993" for this suite.
• [SLOW TEST:133.857 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":191,"skipped":3420,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:32:18.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 00:32:19.074: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 17 00:32:21.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680339, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680339, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680339, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680339, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 00:32:24.113: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:32:36.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9426" for this suite.
STEP: Destroying namespace "webhook-9426-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.801 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":192,"skipped":3428,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:32:36.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 17 00:32:36.398: INFO: Waiting up to 5m0s for pod "pod-e067a03c-741a-4811-a456-38413460dae2" in namespace "emptydir-1144" to be "Succeeded or Failed"
Apr 17 00:32:36.416: INFO: Pod "pod-e067a03c-741a-4811-a456-38413460dae2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.768635ms
Apr 17 00:32:38.420: INFO: Pod "pod-e067a03c-741a-4811-a456-38413460dae2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021850684s
Apr 17 00:32:40.424: INFO: Pod "pod-e067a03c-741a-4811-a456-38413460dae2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025653758s
STEP: Saw pod success
Apr 17 00:32:40.424: INFO: Pod "pod-e067a03c-741a-4811-a456-38413460dae2" satisfied condition "Succeeded or Failed"
Apr 17 00:32:40.427: INFO: Trying to get logs from node latest-worker pod pod-e067a03c-741a-4811-a456-38413460dae2 container test-container: 
STEP: delete the pod
Apr 17 00:32:40.558: INFO: Waiting for pod pod-e067a03c-741a-4811-a456-38413460dae2 to disappear
Apr 17 00:32:40.561: INFO: Pod pod-e067a03c-741a-4811-a456-38413460dae2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:32:40.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1144" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3476,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:32:40.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 17 00:32:48.731: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 17 00:32:48.740: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 17 00:32:50.741: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 17 00:32:50.746: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 17 00:32:52.741: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 17 00:32:52.755: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:32:52.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8265" for this suite.
• [SLOW TEST:12.201 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":194,"skipped":3481,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:32:52.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0417 00:33:02.853897 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 17 00:33:02.853: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:33:02.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4212" for this suite.
• [SLOW TEST:10.088 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":195,"skipped":3515,"failed":0}
S
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:33:02.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 17 00:33:02.954: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d45133c5-df84-4a98-aa26-9d1eb2d0e051" in namespace "downward-api-5678" to be "Succeeded or Failed"
Apr 17 00:33:02.966: INFO: Pod "downwardapi-volume-d45133c5-df84-4a98-aa26-9d1eb2d0e051": Phase="Pending", Reason="", readiness=false. Elapsed: 11.94434ms
Apr 17 00:33:04.970: INFO: Pod "downwardapi-volume-d45133c5-df84-4a98-aa26-9d1eb2d0e051": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015807527s
Apr 17 00:33:06.974: INFO: Pod "downwardapi-volume-d45133c5-df84-4a98-aa26-9d1eb2d0e051": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020111358s
STEP: Saw pod success
Apr 17 00:33:06.974: INFO: Pod "downwardapi-volume-d45133c5-df84-4a98-aa26-9d1eb2d0e051" satisfied condition "Succeeded or Failed"
Apr 17 00:33:06.977: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d45133c5-df84-4a98-aa26-9d1eb2d0e051 container client-container: 
STEP: delete the pod
Apr 17 00:33:06.994: INFO: Waiting for pod downwardapi-volume-d45133c5-df84-4a98-aa26-9d1eb2d0e051 to disappear
Apr 17 00:33:06.998: INFO: Pod downwardapi-volume-d45133c5-df84-4a98-aa26-9d1eb2d0e051 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:33:06.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5678" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3516,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:33:07.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 00:33:07.546: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 17 00:33:09.564: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680387, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680387, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680387, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680387, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 00:33:12.594: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:33:12.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9215" for this suite.
STEP: Destroying namespace "webhook-9215-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.770 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":197,"skipped":3521,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:33:12.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4154 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-4154 Apr 17 00:33:12.852: INFO: Found 
0 stateful pods, waiting for 1 Apr 17 00:33:22.857: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 17 00:33:22.885: INFO: Deleting all statefulset in ns statefulset-4154 Apr 17 00:33:22.918: INFO: Scaling statefulset ss to 0 Apr 17 00:33:43.028: INFO: Waiting for statefulset status.replicas updated to 0 Apr 17 00:33:43.031: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:33:43.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4154" for this suite. • [SLOW TEST:30.279 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":198,"skipped":3525,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:33:43.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0417 00:34:23.285481 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 17 00:34:23.285: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:34:23.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "gc-6832" for this suite. • [SLOW TEST:40.238 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":199,"skipped":3526,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:34:23.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 
'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:34:53.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6202" for this suite. • [SLOW TEST:29.881 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3546,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:34:53.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-6e75bb45-bae4-443a-8613-3f450670b6d7 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-6e75bb45-bae4-443a-8613-3f450670b6d7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:36:15.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-911" for this suite. 
• [SLOW TEST:82.649 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3560,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:36:15.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4721 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4721 STEP: creating replication controller externalsvc in namespace services-4721 I0417 00:36:15.963608 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4721, replica count: 2 I0417 00:36:19.014062 7 runners.go:190] externalsvc Pods: 2 out of 2 
created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 00:36:22.014360 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 17 00:36:22.043: INFO: Creating new exec pod Apr 17 00:36:26.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4721 execpod7vffg -- /bin/sh -x -c nslookup clusterip-service' Apr 17 00:36:26.328: INFO: stderr: "I0417 00:36:26.213016 2191 log.go:172] (0xc0009a0a50) (0xc00047e320) Create stream\nI0417 00:36:26.213081 2191 log.go:172] (0xc0009a0a50) (0xc00047e320) Stream added, broadcasting: 1\nI0417 00:36:26.215649 2191 log.go:172] (0xc0009a0a50) Reply frame received for 1\nI0417 00:36:26.215691 2191 log.go:172] (0xc0009a0a50) (0xc0005aae60) Create stream\nI0417 00:36:26.215704 2191 log.go:172] (0xc0009a0a50) (0xc0005aae60) Stream added, broadcasting: 3\nI0417 00:36:26.216575 2191 log.go:172] (0xc0009a0a50) Reply frame received for 3\nI0417 00:36:26.216613 2191 log.go:172] (0xc0009a0a50) (0xc0005d9220) Create stream\nI0417 00:36:26.216634 2191 log.go:172] (0xc0009a0a50) (0xc0005d9220) Stream added, broadcasting: 5\nI0417 00:36:26.217515 2191 log.go:172] (0xc0009a0a50) Reply frame received for 5\nI0417 00:36:26.313900 2191 log.go:172] (0xc0009a0a50) Data frame received for 5\nI0417 00:36:26.313932 2191 log.go:172] (0xc0005d9220) (5) Data frame handling\nI0417 00:36:26.313952 2191 log.go:172] (0xc0005d9220) (5) Data frame sent\n+ nslookup clusterip-service\nI0417 00:36:26.319054 2191 log.go:172] (0xc0009a0a50) Data frame received for 3\nI0417 00:36:26.319102 2191 log.go:172] (0xc0005aae60) (3) Data frame handling\nI0417 00:36:26.319131 2191 log.go:172] (0xc0005aae60) (3) Data frame sent\nI0417 00:36:26.320535 2191 log.go:172] (0xc0009a0a50) Data frame 
received for 3\nI0417 00:36:26.320574 2191 log.go:172] (0xc0005aae60) (3) Data frame handling\nI0417 00:36:26.320606 2191 log.go:172] (0xc0005aae60) (3) Data frame sent\nI0417 00:36:26.321066 2191 log.go:172] (0xc0009a0a50) Data frame received for 5\nI0417 00:36:26.321090 2191 log.go:172] (0xc0005d9220) (5) Data frame handling\nI0417 00:36:26.321439 2191 log.go:172] (0xc0009a0a50) Data frame received for 3\nI0417 00:36:26.321465 2191 log.go:172] (0xc0005aae60) (3) Data frame handling\nI0417 00:36:26.323286 2191 log.go:172] (0xc0009a0a50) Data frame received for 1\nI0417 00:36:26.323305 2191 log.go:172] (0xc00047e320) (1) Data frame handling\nI0417 00:36:26.323316 2191 log.go:172] (0xc00047e320) (1) Data frame sent\nI0417 00:36:26.323338 2191 log.go:172] (0xc0009a0a50) (0xc00047e320) Stream removed, broadcasting: 1\nI0417 00:36:26.323373 2191 log.go:172] (0xc0009a0a50) Go away received\nI0417 00:36:26.323879 2191 log.go:172] (0xc0009a0a50) (0xc00047e320) Stream removed, broadcasting: 1\nI0417 00:36:26.323909 2191 log.go:172] (0xc0009a0a50) (0xc0005aae60) Stream removed, broadcasting: 3\nI0417 00:36:26.323931 2191 log.go:172] (0xc0009a0a50) (0xc0005d9220) Stream removed, broadcasting: 5\n" Apr 17 00:36:26.329: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4721.svc.cluster.local\tcanonical name = externalsvc.services-4721.svc.cluster.local.\nName:\texternalsvc.services-4721.svc.cluster.local\nAddress: 10.96.160.72\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4721, will wait for the garbage collector to delete the pods Apr 17 00:36:26.388: INFO: Deleting ReplicationController externalsvc took: 6.192532ms Apr 17 00:36:26.688: INFO: Terminating ReplicationController externalsvc pods took: 300.219011ms Apr 17 00:36:31.410: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:36:31.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4721" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:15.641 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":202,"skipped":3579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:36:31.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 17 00:36:31.510: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy 
--unix-socket=/tmp/kubectl-proxy-unix520906527/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:36:31.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8298" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":203,"skipped":3613,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:36:31.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:36:31.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 17 00:36:31.780: INFO: stderr: "" Apr 17 00:36:31.780: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", 
GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:36:31.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2067" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":204,"skipped":3624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:36:31.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 17 00:36:31.874: INFO: Waiting up to 5m0s for pod "var-expansion-bb0e22fa-cce8-43b9-a427-d73181ac1317" in namespace "var-expansion-7562" to be "Succeeded or Failed" Apr 17 00:36:31.877: INFO: Pod "var-expansion-bb0e22fa-cce8-43b9-a427-d73181ac1317": Phase="Pending", Reason="", readiness=false. Elapsed: 3.499104ms Apr 17 00:36:33.883: INFO: Pod "var-expansion-bb0e22fa-cce8-43b9-a427-d73181ac1317": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009600976s Apr 17 00:36:35.888: INFO: Pod "var-expansion-bb0e22fa-cce8-43b9-a427-d73181ac1317": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013945914s STEP: Saw pod success Apr 17 00:36:35.888: INFO: Pod "var-expansion-bb0e22fa-cce8-43b9-a427-d73181ac1317" satisfied condition "Succeeded or Failed" Apr 17 00:36:35.891: INFO: Trying to get logs from node latest-worker2 pod var-expansion-bb0e22fa-cce8-43b9-a427-d73181ac1317 container dapi-container: STEP: delete the pod Apr 17 00:36:35.958: INFO: Waiting for pod var-expansion-bb0e22fa-cce8-43b9-a427-d73181ac1317 to disappear Apr 17 00:36:35.964: INFO: Pod var-expansion-bb0e22fa-cce8-43b9-a427-d73181ac1317 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:36:35.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7562" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3648,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:36:35.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is 
created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:36:41.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3722" for this suite. • [SLOW TEST:5.095 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":206,"skipped":3661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:36:41.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-xf7wd in namespace proxy-2256 I0417 00:36:41.191895 7 runners.go:190] Created replication controller with name: proxy-service-xf7wd, namespace: proxy-2256, replica count: 1 I0417 00:36:42.242360 7 runners.go:190] proxy-service-xf7wd Pods: 1 out 
of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 00:36:43.242546 7 runners.go:190] proxy-service-xf7wd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 00:36:44.242802 7 runners.go:190] proxy-service-xf7wd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 00:36:45.243037 7 runners.go:190] proxy-service-xf7wd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0417 00:36:46.243240 7 runners.go:190] proxy-service-xf7wd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0417 00:36:47.243467 7 runners.go:190] proxy-service-xf7wd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0417 00:36:48.243725 7 runners.go:190] proxy-service-xf7wd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0417 00:36:49.243942 7 runners.go:190] proxy-service-xf7wd Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 17 00:36:49.248: INFO: setup took 8.084574469s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 17 00:36:49.254: INFO: (0) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 5.875843ms) Apr 17 00:36:49.254: INFO: (0) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 6.516035ms) Apr 17 00:36:49.255: INFO: (0) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 7.109042ms) Apr 17 00:36:49.258: INFO: (0) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 
9.927964ms) Apr 17 00:36:49.258: INFO: (0) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 10.041741ms) Apr 17 00:36:49.260: INFO: (0) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 12.013045ms) Apr 17 00:36:49.260: INFO: (0) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 12.101982ms) Apr 17 00:36:49.260: INFO: (0) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:1080/proxy/: ... (200; 12.040036ms) Apr 17 00:36:49.260: INFO: (0) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 12.175307ms) Apr 17 00:36:49.260: INFO: (0) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 12.073239ms) Apr 17 00:36:49.260: INFO: (0) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 12.765137ms) Apr 17 00:36:49.263: INFO: (0) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 15.064208ms) Apr 17 00:36:49.263: INFO: (0) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 15.023107ms) Apr 17 00:36:49.266: INFO: (0) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: ... (200; 5.871103ms) Apr 17 00:36:49.275: INFO: (1) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 6.394458ms) Apr 17 00:36:49.275: INFO: (1) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 6.576816ms) Apr 17 00:36:49.275: INFO: (1) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 6.541605ms) Apr 17 00:36:49.275: INFO: (1) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: test<... 
(200; 3.46466ms)
Apr 17 00:36:49.279: INFO: (2) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 3.848799ms)
Apr 17 00:36:49.279: INFO: (2) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: ... (200; 4.297972ms)
Apr 17 00:36:49.280: INFO: (2) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 4.550814ms)
Apr 17 00:36:49.280: INFO: (2) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 4.887809ms)
Apr 17 00:36:49.281: INFO: (2) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 4.976643ms)
Apr 17 00:36:49.281: INFO: (2) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 4.985513ms)
Apr 17 00:36:49.283: INFO: (3) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 2.680302ms)
Apr 17 00:36:49.284: INFO: (3) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: ... (200; 4.10774ms)
Apr 17 00:36:49.285: INFO: (3) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 4.120161ms)
Apr 17 00:36:49.285: INFO: (3) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.167713ms)
Apr 17 00:36:49.285: INFO: (3) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 4.219837ms)
Apr 17 00:36:49.285: INFO: (3) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 4.328537ms)
Apr 17 00:36:49.285: INFO: (3) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 4.569162ms)
Apr 17 00:36:49.285: INFO: (3) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 4.349332ms)
Apr 17 00:36:49.285: INFO: (3) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 4.44664ms)
Apr 17 00:36:49.287: INFO: (3) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 6.10041ms)
Apr 17 00:36:49.287: INFO: (3) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 5.997286ms)
Apr 17 00:36:49.287: INFO: (3) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 5.932682ms)
Apr 17 00:36:49.287: INFO: (3) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 6.004757ms)
Apr 17 00:36:49.287: INFO: (3) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 5.964665ms)
Apr 17 00:36:49.290: INFO: (4) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 3.401573ms)
Apr 17 00:36:49.290: INFO: (4) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 3.528379ms)
Apr 17 00:36:49.290: INFO: (4) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:1080/proxy/: ... (200; 3.568734ms)
Apr 17 00:36:49.291: INFO: (4) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 3.612376ms)
Apr 17 00:36:49.291: INFO: (4) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: test<... (200; 3.89029ms)
Apr 17 00:36:49.291: INFO: (4) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 4.060608ms)
Apr 17 00:36:49.291: INFO: (4) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 4.035416ms)
Apr 17 00:36:49.292: INFO: (4) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 4.839455ms)
Apr 17 00:36:49.292: INFO: (4) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 5.222892ms)
Apr 17 00:36:49.292: INFO: (4) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 5.217409ms)
Apr 17 00:36:49.292: INFO: (4) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 5.288937ms)
Apr 17 00:36:49.292: INFO: (4) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 5.299764ms)
Apr 17 00:36:49.292: INFO: (4) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 5.455285ms)
Apr 17 00:36:49.292: INFO: (4) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 5.467818ms)
Apr 17 00:36:49.296: INFO: (5) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 3.104866ms)
Apr 17 00:36:49.296: INFO: (5) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 3.061393ms)
Apr 17 00:36:49.296: INFO: (5) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 3.800066ms)
Apr 17 00:36:49.296: INFO: (5) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 3.917143ms)
Apr 17 00:36:49.296: INFO: (5) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:1080/proxy/: ... (200; 3.89815ms)
Apr 17 00:36:49.297: INFO: (5) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 4.212599ms)
Apr 17 00:36:49.297: INFO: (5) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 4.53799ms)
Apr 17 00:36:49.297: INFO: (5) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 4.523174ms)
Apr 17 00:36:49.297: INFO: (5) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.548573ms)
Apr 17 00:36:49.297: INFO: (5) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 4.606485ms)
Apr 17 00:36:49.297: INFO: (5) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.831024ms)
Apr 17 00:36:49.297: INFO: (5) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 4.81609ms)
Apr 17 00:36:49.298: INFO: (5) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 5.274165ms)
Apr 17 00:36:49.298: INFO: (5) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 5.275037ms)
Apr 17 00:36:49.298: INFO: (5) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: test (200; 8.333296ms)
Apr 17 00:36:49.306: INFO: (6) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:1080/proxy/: ... (200; 8.386615ms)
Apr 17 00:36:49.306: INFO: (6) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<...
(200; 8.386894ms)
Apr 17 00:36:49.306: INFO: (6) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 8.354429ms)
Apr 17 00:36:49.306: INFO: (6) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 8.415158ms)
Apr 17 00:36:49.306: INFO: (6) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 8.489824ms)
Apr 17 00:36:49.308: INFO: (6) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 10.256104ms)
Apr 17 00:36:49.309: INFO: (6) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 10.508858ms)
Apr 17 00:36:49.309: INFO: (6) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 10.990649ms)
Apr 17 00:36:49.309: INFO: (6) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 11.119394ms)
Apr 17 00:36:49.309: INFO: (6) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 11.325401ms)
Apr 17 00:36:49.310: INFO: (6) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 11.502587ms)
Apr 17 00:36:49.312: INFO: (7) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 2.631717ms)
Apr 17 00:36:49.312: INFO: (7) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 2.734104ms)
Apr 17 00:36:49.313: INFO: (7) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 3.0582ms)
Apr 17 00:36:49.313: INFO: (7) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 3.440823ms)
Apr 17 00:36:49.313: INFO: (7) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 3.469577ms)
Apr 17 00:36:49.313: INFO: (7) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 3.498406ms)
Apr 17 00:36:49.313: INFO: (7) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: ... (200; 3.532002ms)
Apr 17 00:36:49.313: INFO: (7) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 3.633345ms)
Apr 17 00:36:49.313: INFO: (7) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 3.634555ms)
Apr 17 00:36:49.313: INFO: (7) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 3.563285ms)
Apr 17 00:36:49.313: INFO: (7) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 3.823179ms)
Apr 17 00:36:49.315: INFO: (7) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 4.770762ms)
Apr 17 00:36:49.315: INFO: (7) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 4.817223ms)
Apr 17 00:36:49.315: INFO: (7) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 5.286345ms)
Apr 17 00:36:49.315: INFO: (7) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 5.378381ms)
Apr 17 00:36:49.318: INFO: (8) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 2.757222ms)
Apr 17 00:36:49.318: INFO: (8) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 3.035525ms)
Apr 17 00:36:49.319: INFO: (8) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 3.313075ms)
Apr 17 00:36:49.320: INFO: (8) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 4.287928ms)
Apr 17 00:36:49.320: INFO: (8) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 4.924313ms)
Apr 17 00:36:49.320: INFO: (8) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.958925ms)
Apr 17 00:36:49.320: INFO: (8) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 4.951304ms)
Apr 17 00:36:49.320: INFO: (8) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: test (200; 5.219257ms)
Apr 17 00:36:49.321: INFO: (8) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 5.225817ms)
Apr 17 00:36:49.321: INFO: (8) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 5.227541ms)
Apr 17 00:36:49.321: INFO: (8) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:1080/proxy/: ... (200; 5.241134ms)
Apr 17 00:36:49.321: INFO: (8) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 5.333046ms)
Apr 17 00:36:49.321: INFO: (8) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 5.342424ms)
Apr 17 00:36:49.321: INFO: (8) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 5.273199ms)
Apr 17 00:36:49.321: INFO: (8) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 5.783621ms)
Apr 17 00:36:49.325: INFO: (9) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: ... (200; 4.587882ms)
Apr 17 00:36:49.326: INFO: (9) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 4.628232ms)
Apr 17 00:36:49.326: INFO: (9) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 4.640721ms)
Apr 17 00:36:49.326: INFO: (9) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 4.691901ms)
Apr 17 00:36:49.326: INFO: (9) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 4.703837ms)
Apr 17 00:36:49.326: INFO: (9) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.936413ms)
Apr 17 00:36:49.326: INFO: (9) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 4.986854ms)
Apr 17 00:36:49.326: INFO: (9) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 4.936855ms)
Apr 17 00:36:49.326: INFO: (9) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 4.9833ms)
Apr 17 00:36:49.326: INFO: (9) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 5.060537ms)
Apr 17 00:36:49.326: INFO: (9) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 5.025172ms)
Apr 17 00:36:49.326: INFO: (9) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 5.002701ms)
Apr 17 00:36:49.329: INFO: (10) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 2.494146ms)
Apr 17 00:36:49.329: INFO: (10) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 2.478198ms)
Apr 17 00:36:49.329: INFO: (10) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:1080/proxy/: ... (200; 2.545396ms)
Apr 17 00:36:49.331: INFO: (10) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 4.393275ms)
Apr 17 00:36:49.331: INFO: (10) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.4182ms)
Apr 17 00:36:49.331: INFO: (10) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 4.823123ms)
Apr 17 00:36:49.331: INFO: (10) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 4.935277ms)
Apr 17 00:36:49.332: INFO: (10) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<...
(200; 5.147192ms)
Apr 17 00:36:49.332: INFO: (10) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: ... (200; 3.449986ms)
Apr 17 00:36:49.336: INFO: (11) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 4.082103ms)
Apr 17 00:36:49.336: INFO: (11) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 4.057933ms)
Apr 17 00:36:49.336: INFO: (11) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 4.01281ms)
Apr 17 00:36:49.336: INFO: (11) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 4.085647ms)
Apr 17 00:36:49.336: INFO: (11) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 4.065013ms)
Apr 17 00:36:49.336: INFO: (11) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 4.141819ms)
Apr 17 00:36:49.336: INFO: (11) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 4.113279ms)
Apr 17 00:36:49.336: INFO: (11) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: test<... (200; 4.441786ms)
Apr 17 00:36:49.341: INFO: (12) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: test (200; 4.542509ms)
Apr 17 00:36:49.341: INFO: (12) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.569604ms)
Apr 17 00:36:49.341: INFO: (12) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.406636ms)
Apr 17 00:36:49.341: INFO: (12) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 4.599294ms)
Apr 17 00:36:49.341: INFO: (12) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:1080/proxy/: ... (200; 4.626554ms)
Apr 17 00:36:49.341: INFO: (12) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 4.831834ms)
Apr 17 00:36:49.341: INFO: (12) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 4.873001ms)
Apr 17 00:36:49.341: INFO: (12) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 4.835761ms)
Apr 17 00:36:49.342: INFO: (12) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 4.964114ms)
Apr 17 00:36:49.342: INFO: (12) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 5.025737ms)
Apr 17 00:36:49.342: INFO: (12) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 5.088299ms)
Apr 17 00:36:49.342: INFO: (12) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 5.22142ms)
Apr 17 00:36:49.344: INFO: (13) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 1.751323ms)
Apr 17 00:36:49.346: INFO: (13) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.35969ms)
Apr 17 00:36:49.346: INFO: (13) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 4.39472ms)
Apr 17 00:36:49.346: INFO: (13) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 4.367335ms)
Apr 17 00:36:49.346: INFO: (13) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: ... (200; 4.512706ms)
Apr 17 00:36:49.346: INFO: (13) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 4.531806ms)
Apr 17 00:36:49.346: INFO: (13) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.556879ms)
Apr 17 00:36:49.347: INFO: (13) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 4.960625ms)
Apr 17 00:36:49.347: INFO: (13) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 5.098173ms)
Apr 17 00:36:49.347: INFO: (13) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 5.089088ms)
Apr 17 00:36:49.347: INFO: (13) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 5.056276ms)
Apr 17 00:36:49.347: INFO: (13) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 5.132663ms)
Apr 17 00:36:49.347: INFO: (13) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 5.195181ms)
Apr 17 00:36:49.351: INFO: (14) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 3.913924ms)
Apr 17 00:36:49.351: INFO: (14) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.193077ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: test (200; 4.273055ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 4.557768ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 4.558813ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 4.558521ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 4.562313ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 4.647372ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:1080/proxy/: ... (200; 4.732899ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 4.955582ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 5.037852ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 5.158813ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 5.068031ms)
Apr 17 00:36:49.352: INFO: (14) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 5.098106ms)
Apr 17 00:36:49.356: INFO: (15) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 3.484924ms)
Apr 17 00:36:49.356: INFO: (15) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:1080/proxy/: ... (200; 3.588918ms)
Apr 17 00:36:49.356: INFO: (15) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 3.658664ms)
Apr 17 00:36:49.356: INFO: (15) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 3.649757ms)
Apr 17 00:36:49.356: INFO: (15) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 3.736512ms)
Apr 17 00:36:49.358: INFO: (15) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 5.03702ms)
Apr 17 00:36:49.358: INFO: (15) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<...
(200; 5.011346ms)
Apr 17 00:36:49.358: INFO: (15) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 5.06372ms)
Apr 17 00:36:49.358: INFO: (15) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: test (200; 6.676718ms)
Apr 17 00:36:49.359: INFO: (15) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 6.782065ms)
Apr 17 00:36:49.360: INFO: (15) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 7.172732ms)
Apr 17 00:36:49.360: INFO: (15) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 7.160036ms)
Apr 17 00:36:49.360: INFO: (15) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 7.138971ms)
Apr 17 00:36:49.360: INFO: (15) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 7.184002ms)
Apr 17 00:36:49.360: INFO: (15) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 7.250371ms)
Apr 17 00:36:49.363: INFO: (16) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 2.778294ms)
Apr 17 00:36:49.363: INFO: (16) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 2.951954ms)
Apr 17 00:36:49.363: INFO: (16) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:1080/proxy/: ... (200; 3.131985ms)
Apr 17 00:36:49.363: INFO: (16) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 3.210351ms)
Apr 17 00:36:49.363: INFO: (16) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 3.528658ms)
Apr 17 00:36:49.363: INFO: (16) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 3.533078ms)
Apr 17 00:36:49.364: INFO: (16) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 3.84665ms)
Apr 17 00:36:49.364: INFO: (16) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 3.967499ms)
Apr 17 00:36:49.364: INFO: (16) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 3.902393ms)
Apr 17 00:36:49.364: INFO: (16) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 3.995292ms)
Apr 17 00:36:49.364: INFO: (16) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: ... (200; 2.515485ms)
Apr 17 00:36:49.367: INFO: (17) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 2.845706ms)
Apr 17 00:36:49.367: INFO: (17) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 2.70682ms)
Apr 17 00:36:49.367: INFO: (17) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: test (200; 4.531136ms)
Apr 17 00:36:49.369: INFO: (17) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 4.560114ms)
Apr 17 00:36:49.369: INFO: (17) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 4.60827ms)
Apr 17 00:36:49.370: INFO: (17) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 5.31699ms)
Apr 17 00:36:49.370: INFO: (17) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 5.336306ms)
Apr 17 00:36:49.370: INFO: (17) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 5.382399ms)
Apr 17 00:36:49.370: INFO: (17) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 5.566181ms)
Apr 17 00:36:49.370: INFO: (17) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 5.562849ms)
Apr 17 00:36:49.374: INFO: (18) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 3.338235ms)
Apr 17 00:36:49.374: INFO: (18) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 3.312946ms)
Apr 17 00:36:49.374: INFO: (18) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 3.60574ms)
Apr 17 00:36:49.374: INFO: (18) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:462/proxy/: tls qux (200; 3.712207ms)
Apr 17 00:36:49.374: INFO: (18) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 3.754736ms)
Apr 17 00:36:49.374: INFO: (18) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 3.952756ms)
Apr 17 00:36:49.374: INFO: (18) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 4.15208ms)
Apr 17 00:36:49.374: INFO: (18) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 4.072184ms)
Apr 17 00:36:49.374: INFO: (18) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.121957ms)
Apr 17 00:36:49.375: INFO: (18) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: ... (200; 4.485149ms)
Apr 17 00:36:49.375: INFO: (18) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 4.502767ms)
Apr 17 00:36:49.375: INFO: (18) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 4.610515ms)
Apr 17 00:36:49.378: INFO: (19) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:1080/proxy/: test<... (200; 2.786422ms)
Apr 17 00:36:49.379: INFO: (19) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:443/proxy/: ... (200; 4.817385ms)
Apr 17 00:36:49.380: INFO: (19) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 4.876309ms)
Apr 17 00:36:49.381: INFO: (19) /api/v1/namespaces/proxy-2256/pods/https:proxy-service-xf7wd-ztv76:460/proxy/: tls baz (200; 6.108148ms)
Apr 17 00:36:49.383: INFO: (19) /api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname1/proxy/: tls baz (200; 7.962957ms)
Apr 17 00:36:49.384: INFO: (19) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname2/proxy/: bar (200; 8.481121ms)
Apr 17 00:36:49.384: INFO: (19) /api/v1/namespaces/proxy-2256/pods/http:proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 8.728236ms)
Apr 17 00:36:49.384: INFO: (19) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76/proxy/: test (200; 8.710869ms)
Apr 17 00:36:49.384: INFO: (19) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:160/proxy/: foo (200; 9.373945ms)
Apr 17 00:36:49.385: INFO: (19) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname1/proxy/: foo (200; 9.547445ms)
Apr 17 00:36:49.385: INFO: (19) /api/v1/namespaces/proxy-2256/pods/proxy-service-xf7wd-ztv76:162/proxy/: bar (200; 10.232284ms)
Apr 17 00:36:49.386: INFO: (19)
/api/v1/namespaces/proxy-2256/services/https:proxy-service-xf7wd:tlsportname2/proxy/: tls qux (200; 10.765674ms)
Apr 17 00:36:49.394: INFO: (19) /api/v1/namespaces/proxy-2256/services/http:proxy-service-xf7wd:portname1/proxy/: foo (200; 18.829544ms)
Apr 17 00:36:49.402: INFO: (19) /api/v1/namespaces/proxy-2256/services/proxy-service-xf7wd:portname2/proxy/: bar (200; 27.053824ms)
STEP: deleting ReplicationController proxy-service-xf7wd in namespace proxy-2256, will wait for the garbage collector to delete the pods
Apr 17 00:36:49.463: INFO: Deleting ReplicationController proxy-service-xf7wd took: 6.385648ms
Apr 17 00:36:49.763: INFO: Terminating ReplicationController proxy-service-xf7wd pods took: 300.222938ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:37:03.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2256" for this suite.
• [SLOW TEST:22.034 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":207,"skipped":3689,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:37:03.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:37:14.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9278" for this suite.
• [SLOW TEST:11.118 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":208,"skipped":3697,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:37:14.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-683361a6-203a-43b9-83ad-5279152101b7
STEP: Creating a pod to test consume configMaps
Apr 17 00:37:14.344: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-06201d20-99b5-4038-a42f-2f477eedb603" in namespace "projected-5472" to be "Succeeded or Failed"
Apr 17 00:37:14.348: INFO: Pod "pod-projected-configmaps-06201d20-99b5-4038-a42f-2f477eedb603": Phase="Pending", Reason="", readiness=false. Elapsed: 3.867393ms
Apr 17 00:37:16.351: INFO: Pod "pod-projected-configmaps-06201d20-99b5-4038-a42f-2f477eedb603": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007732296s
Apr 17 00:37:18.355: INFO: Pod "pod-projected-configmaps-06201d20-99b5-4038-a42f-2f477eedb603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011723051s
STEP: Saw pod success
Apr 17 00:37:18.356: INFO: Pod "pod-projected-configmaps-06201d20-99b5-4038-a42f-2f477eedb603" satisfied condition "Succeeded or Failed"
Apr 17 00:37:18.358: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-06201d20-99b5-4038-a42f-2f477eedb603 container projected-configmap-volume-test: 
STEP: delete the pod
Apr 17 00:37:18.397: INFO: Waiting for pod pod-projected-configmaps-06201d20-99b5-4038-a42f-2f477eedb603 to disappear
Apr 17 00:37:18.401: INFO: Pod pod-projected-configmaps-06201d20-99b5-4038-a42f-2f477eedb603 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:37:18.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5472" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":209,"skipped":3721,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:37:18.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-6204
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6204 to expose endpoints map[]
Apr 17 00:37:18.550: INFO: Get endpoints failed (17.083424ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 17 00:37:19.553: INFO: successfully validated that service multi-endpoint-test in namespace services-6204 exposes endpoints map[] (1.020911303s elapsed)
STEP: Creating pod pod1 in namespace services-6204
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6204 to expose endpoints map[pod1:[100]]
Apr 17 00:37:23.695: INFO: successfully validated that service multi-endpoint-test in namespace services-6204 exposes endpoints map[pod1:[100]] (4.135295017s elapsed)
STEP: Creating pod pod2 in namespace services-6204
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6204 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 17 00:37:26.745: INFO: successfully validated that service multi-endpoint-test in namespace services-6204 exposes endpoints map[pod1:[100] pod2:[101]] (3.045282921s elapsed)
STEP: Deleting pod pod1 in namespace services-6204
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6204 to expose endpoints map[pod2:[101]]
Apr 17 00:37:27.765: INFO: successfully validated that service multi-endpoint-test in namespace services-6204 exposes endpoints map[pod2:[101]] (1.015199414s elapsed)
STEP: Deleting pod pod2 in namespace services-6204
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6204 to expose endpoints map[]
Apr 17 00:37:28.781: INFO: successfully validated that service multi-endpoint-test in namespace services-6204 exposes endpoints map[] (1.011028901s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:37:28.843: INFO: Waiting up to 3m0s for all (but 0) nodes
to be ready STEP: Destroying namespace "services-6204" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:10.470 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":210,"skipped":3730,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:37:28.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0417 00:37:40.266157 7 metrics_grabber.go:84] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 17 00:37:40.266: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:37:40.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2024" for this suite. 
• [SLOW TEST:11.394 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":211,"skipped":3734,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:37:40.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:37:40.315: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 17 00:37:42.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3034 create -f -' Apr 17 00:37:45.017: INFO: stderr: "" Apr 17 00:37:45.017: INFO: stdout: 
"e2e-test-crd-publish-openapi-1775-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 17 00:37:45.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3034 delete e2e-test-crd-publish-openapi-1775-crds test-cr' Apr 17 00:37:45.143: INFO: stderr: "" Apr 17 00:37:45.143: INFO: stdout: "e2e-test-crd-publish-openapi-1775-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 17 00:37:45.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3034 apply -f -' Apr 17 00:37:45.410: INFO: stderr: "" Apr 17 00:37:45.410: INFO: stdout: "e2e-test-crd-publish-openapi-1775-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 17 00:37:45.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3034 delete e2e-test-crd-publish-openapi-1775-crds test-cr' Apr 17 00:37:45.532: INFO: stderr: "" Apr 17 00:37:45.532: INFO: stdout: "e2e-test-crd-publish-openapi-1775-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 17 00:37:45.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1775-crds' Apr 17 00:37:46.096: INFO: stderr: "" Apr 17 00:37:46.096: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1775-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:37:49.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-publish-openapi-3034" for this suite. • [SLOW TEST:8.748 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":212,"skipped":3743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:37:49.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Apr 17 00:37:49.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Apr 17 00:37:49.336: INFO: stderr: "" Apr 17 00:37:49.336: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:37:49.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3507" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":213,"skipped":3773,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:37:49.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:37:49.388: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:37:53.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9537" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3795,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:37:53.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 17 00:38:03.618: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-438 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:38:03.618: INFO: >>> kubeConfig: /root/.kube/config I0417 00:38:03.654582 7 log.go:172] (0xc005e53760) (0xc00085b9a0) Create stream I0417 00:38:03.654623 7 log.go:172] (0xc005e53760) (0xc00085b9a0) Stream added, broadcasting: 1 I0417 00:38:03.656633 7 log.go:172] (0xc005e53760) Reply frame received for 1 I0417 00:38:03.656681 7 log.go:172] (0xc005e53760) (0xc0023775e0) Create stream I0417 00:38:03.656694 7 log.go:172] (0xc005e53760) (0xc0023775e0) Stream added, broadcasting: 3 I0417 00:38:03.658620 7 log.go:172] 
(0xc005e53760) Reply frame received for 3 I0417 00:38:03.658657 7 log.go:172] (0xc005e53760) (0xc00085bae0) Create stream I0417 00:38:03.658666 7 log.go:172] (0xc005e53760) (0xc00085bae0) Stream added, broadcasting: 5 I0417 00:38:03.659608 7 log.go:172] (0xc005e53760) Reply frame received for 5 I0417 00:38:03.742934 7 log.go:172] (0xc005e53760) Data frame received for 5 I0417 00:38:03.742977 7 log.go:172] (0xc00085bae0) (5) Data frame handling I0417 00:38:03.743005 7 log.go:172] (0xc005e53760) Data frame received for 3 I0417 00:38:03.743018 7 log.go:172] (0xc0023775e0) (3) Data frame handling I0417 00:38:03.743034 7 log.go:172] (0xc0023775e0) (3) Data frame sent I0417 00:38:03.743048 7 log.go:172] (0xc005e53760) Data frame received for 3 I0417 00:38:03.743081 7 log.go:172] (0xc0023775e0) (3) Data frame handling I0417 00:38:03.744679 7 log.go:172] (0xc005e53760) Data frame received for 1 I0417 00:38:03.744719 7 log.go:172] (0xc00085b9a0) (1) Data frame handling I0417 00:38:03.744737 7 log.go:172] (0xc00085b9a0) (1) Data frame sent I0417 00:38:03.744769 7 log.go:172] (0xc005e53760) (0xc00085b9a0) Stream removed, broadcasting: 1 I0417 00:38:03.744788 7 log.go:172] (0xc005e53760) Go away received I0417 00:38:03.744905 7 log.go:172] (0xc005e53760) (0xc00085b9a0) Stream removed, broadcasting: 1 I0417 00:38:03.744947 7 log.go:172] (0xc005e53760) (0xc0023775e0) Stream removed, broadcasting: 3 I0417 00:38:03.744967 7 log.go:172] (0xc005e53760) (0xc00085bae0) Stream removed, broadcasting: 5 Apr 17 00:38:03.744: INFO: Exec stderr: "" Apr 17 00:38:03.745: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-438 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:38:03.745: INFO: >>> kubeConfig: /root/.kube/config I0417 00:38:03.774649 7 log.go:172] (0xc005e53d90) (0xc000c76820) Create stream I0417 00:38:03.774676 7 log.go:172] (0xc005e53d90) (0xc000c76820) Stream added, 
broadcasting: 1 I0417 00:38:03.776514 7 log.go:172] (0xc005e53d90) Reply frame received for 1 I0417 00:38:03.776570 7 log.go:172] (0xc005e53d90) (0xc002377720) Create stream I0417 00:38:03.776591 7 log.go:172] (0xc005e53d90) (0xc002377720) Stream added, broadcasting: 3 I0417 00:38:03.777955 7 log.go:172] (0xc005e53d90) Reply frame received for 3 I0417 00:38:03.777999 7 log.go:172] (0xc005e53d90) (0xc00282ed20) Create stream I0417 00:38:03.778011 7 log.go:172] (0xc005e53d90) (0xc00282ed20) Stream added, broadcasting: 5 I0417 00:38:03.778904 7 log.go:172] (0xc005e53d90) Reply frame received for 5 I0417 00:38:03.841858 7 log.go:172] (0xc005e53d90) Data frame received for 5 I0417 00:38:03.841902 7 log.go:172] (0xc00282ed20) (5) Data frame handling I0417 00:38:03.841934 7 log.go:172] (0xc005e53d90) Data frame received for 3 I0417 00:38:03.841949 7 log.go:172] (0xc002377720) (3) Data frame handling I0417 00:38:03.841993 7 log.go:172] (0xc002377720) (3) Data frame sent I0417 00:38:03.842013 7 log.go:172] (0xc005e53d90) Data frame received for 3 I0417 00:38:03.842026 7 log.go:172] (0xc002377720) (3) Data frame handling I0417 00:38:03.843570 7 log.go:172] (0xc005e53d90) Data frame received for 1 I0417 00:38:03.843595 7 log.go:172] (0xc000c76820) (1) Data frame handling I0417 00:38:03.843612 7 log.go:172] (0xc000c76820) (1) Data frame sent I0417 00:38:03.843622 7 log.go:172] (0xc005e53d90) (0xc000c76820) Stream removed, broadcasting: 1 I0417 00:38:03.843630 7 log.go:172] (0xc005e53d90) Go away received I0417 00:38:03.843908 7 log.go:172] (0xc005e53d90) (0xc000c76820) Stream removed, broadcasting: 1 I0417 00:38:03.843941 7 log.go:172] (0xc005e53d90) (0xc002377720) Stream removed, broadcasting: 3 I0417 00:38:03.843959 7 log.go:172] (0xc005e53d90) (0xc00282ed20) Stream removed, broadcasting: 5 Apr 17 00:38:03.843: INFO: Exec stderr: "" Apr 17 00:38:03.844: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-438 PodName:test-pod 
ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:38:03.844: INFO: >>> kubeConfig: /root/.kube/config I0417 00:38:03.875469 7 log.go:172] (0xc006916630) (0xc0023779a0) Create stream I0417 00:38:03.875495 7 log.go:172] (0xc006916630) (0xc0023779a0) Stream added, broadcasting: 1 I0417 00:38:03.881562 7 log.go:172] (0xc006916630) Reply frame received for 1 I0417 00:38:03.881618 7 log.go:172] (0xc006916630) (0xc0011a2140) Create stream I0417 00:38:03.881658 7 log.go:172] (0xc006916630) (0xc0011a2140) Stream added, broadcasting: 3 I0417 00:38:03.884642 7 log.go:172] (0xc006916630) Reply frame received for 3 I0417 00:38:03.884778 7 log.go:172] (0xc006916630) (0xc000508140) Create stream I0417 00:38:03.885077 7 log.go:172] (0xc006916630) (0xc000508140) Stream added, broadcasting: 5 I0417 00:38:03.886433 7 log.go:172] (0xc006916630) Reply frame received for 5 I0417 00:38:03.936274 7 log.go:172] (0xc006916630) Data frame received for 3 I0417 00:38:03.936338 7 log.go:172] (0xc0011a2140) (3) Data frame handling I0417 00:38:03.936365 7 log.go:172] (0xc0011a2140) (3) Data frame sent I0417 00:38:03.936388 7 log.go:172] (0xc006916630) Data frame received for 3 I0417 00:38:03.936427 7 log.go:172] (0xc006916630) Data frame received for 5 I0417 00:38:03.936487 7 log.go:172] (0xc000508140) (5) Data frame handling I0417 00:38:03.936527 7 log.go:172] (0xc0011a2140) (3) Data frame handling I0417 00:38:03.938280 7 log.go:172] (0xc006916630) Data frame received for 1 I0417 00:38:03.938305 7 log.go:172] (0xc0023779a0) (1) Data frame handling I0417 00:38:03.938324 7 log.go:172] (0xc0023779a0) (1) Data frame sent I0417 00:38:03.938347 7 log.go:172] (0xc006916630) (0xc0023779a0) Stream removed, broadcasting: 1 I0417 00:38:03.938474 7 log.go:172] (0xc006916630) Go away received I0417 00:38:03.938526 7 log.go:172] (0xc006916630) (0xc0023779a0) Stream removed, broadcasting: 1 I0417 00:38:03.938556 7 log.go:172] (0xc006916630) (0xc0011a2140) 
Stream removed, broadcasting: 3 I0417 00:38:03.938573 7 log.go:172] (0xc006916630) (0xc000508140) Stream removed, broadcasting: 5 Apr 17 00:38:03.938: INFO: Exec stderr: "" Apr 17 00:38:03.938: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-438 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:38:03.938: INFO: >>> kubeConfig: /root/.kube/config I0417 00:38:03.976146 7 log.go:172] (0xc0027ef340) (0xc00196e8c0) Create stream I0417 00:38:03.976168 7 log.go:172] (0xc0027ef340) (0xc00196e8c0) Stream added, broadcasting: 1 I0417 00:38:03.977934 7 log.go:172] (0xc0027ef340) Reply frame received for 1 I0417 00:38:03.977974 7 log.go:172] (0xc0027ef340) (0xc00085a0a0) Create stream I0417 00:38:03.977989 7 log.go:172] (0xc0027ef340) (0xc00085a0a0) Stream added, broadcasting: 3 I0417 00:38:03.978877 7 log.go:172] (0xc0027ef340) Reply frame received for 3 I0417 00:38:03.978909 7 log.go:172] (0xc0027ef340) (0xc00196eb40) Create stream I0417 00:38:03.978921 7 log.go:172] (0xc0027ef340) (0xc00196eb40) Stream added, broadcasting: 5 I0417 00:38:03.979919 7 log.go:172] (0xc0027ef340) Reply frame received for 5 I0417 00:38:04.042631 7 log.go:172] (0xc0027ef340) Data frame received for 5 I0417 00:38:04.042673 7 log.go:172] (0xc0027ef340) Data frame received for 3 I0417 00:38:04.042717 7 log.go:172] (0xc00085a0a0) (3) Data frame handling I0417 00:38:04.042730 7 log.go:172] (0xc00085a0a0) (3) Data frame sent I0417 00:38:04.042745 7 log.go:172] (0xc0027ef340) Data frame received for 3 I0417 00:38:04.042757 7 log.go:172] (0xc00085a0a0) (3) Data frame handling I0417 00:38:04.042791 7 log.go:172] (0xc00196eb40) (5) Data frame handling I0417 00:38:04.043914 7 log.go:172] (0xc0027ef340) Data frame received for 1 I0417 00:38:04.043933 7 log.go:172] (0xc00196e8c0) (1) Data frame handling I0417 00:38:04.043943 7 log.go:172] (0xc00196e8c0) (1) Data frame sent I0417 00:38:04.043968 7 
log.go:172] (0xc0027ef340) (0xc00196e8c0) Stream removed, broadcasting: 1 I0417 00:38:04.044039 7 log.go:172] (0xc0027ef340) Go away received I0417 00:38:04.044153 7 log.go:172] (0xc0027ef340) (0xc00196e8c0) Stream removed, broadcasting: 1 I0417 00:38:04.044179 7 log.go:172] (0xc0027ef340) (0xc00085a0a0) Stream removed, broadcasting: 3 I0417 00:38:04.044194 7 log.go:172] (0xc0027ef340) (0xc00196eb40) Stream removed, broadcasting: 5 Apr 17 00:38:04.044: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 17 00:38:04.044: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-438 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:38:04.044: INFO: >>> kubeConfig: /root/.kube/config I0417 00:38:04.069309 7 log.go:172] (0xc0027ef600) (0xc00196f220) Create stream I0417 00:38:04.069333 7 log.go:172] (0xc0027ef600) (0xc00196f220) Stream added, broadcasting: 1 I0417 00:38:04.071121 7 log.go:172] (0xc0027ef600) Reply frame received for 1 I0417 00:38:04.071167 7 log.go:172] (0xc0027ef600) (0xc0011a2640) Create stream I0417 00:38:04.071182 7 log.go:172] (0xc0027ef600) (0xc0011a2640) Stream added, broadcasting: 3 I0417 00:38:04.071938 7 log.go:172] (0xc0027ef600) Reply frame received for 3 I0417 00:38:04.071994 7 log.go:172] (0xc0027ef600) (0xc0014fcaa0) Create stream I0417 00:38:04.072017 7 log.go:172] (0xc0027ef600) (0xc0014fcaa0) Stream added, broadcasting: 5 I0417 00:38:04.072845 7 log.go:172] (0xc0027ef600) Reply frame received for 5 I0417 00:38:04.130683 7 log.go:172] (0xc0027ef600) Data frame received for 5 I0417 00:38:04.130728 7 log.go:172] (0xc0014fcaa0) (5) Data frame handling I0417 00:38:04.130755 7 log.go:172] (0xc0027ef600) Data frame received for 3 I0417 00:38:04.130769 7 log.go:172] (0xc0011a2640) (3) Data frame handling I0417 00:38:04.130789 7 log.go:172] (0xc0011a2640) (3) Data frame 
sent I0417 00:38:04.130802 7 log.go:172] (0xc0027ef600) Data frame received for 3 I0417 00:38:04.130812 7 log.go:172] (0xc0011a2640) (3) Data frame handling I0417 00:38:04.132197 7 log.go:172] (0xc0027ef600) Data frame received for 1 I0417 00:38:04.132219 7 log.go:172] (0xc00196f220) (1) Data frame handling I0417 00:38:04.132227 7 log.go:172] (0xc00196f220) (1) Data frame sent I0417 00:38:04.132237 7 log.go:172] (0xc0027ef600) (0xc00196f220) Stream removed, broadcasting: 1 I0417 00:38:04.132294 7 log.go:172] (0xc0027ef600) (0xc00196f220) Stream removed, broadcasting: 1 I0417 00:38:04.132303 7 log.go:172] (0xc0027ef600) (0xc0011a2640) Stream removed, broadcasting: 3 I0417 00:38:04.132328 7 log.go:172] (0xc0027ef600) Go away received I0417 00:38:04.132405 7 log.go:172] (0xc0027ef600) (0xc0014fcaa0) Stream removed, broadcasting: 5 Apr 17 00:38:04.132: INFO: Exec stderr: "" Apr 17 00:38:04.132: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-438 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:38:04.132: INFO: >>> kubeConfig: /root/.kube/config I0417 00:38:04.167316 7 log.go:172] (0xc002c6c160) (0xc0011a2e60) Create stream I0417 00:38:04.167348 7 log.go:172] (0xc002c6c160) (0xc0011a2e60) Stream added, broadcasting: 1 I0417 00:38:04.170298 7 log.go:172] (0xc002c6c160) Reply frame received for 1 I0417 00:38:04.170360 7 log.go:172] (0xc002c6c160) (0xc001e646e0) Create stream I0417 00:38:04.170388 7 log.go:172] (0xc002c6c160) (0xc001e646e0) Stream added, broadcasting: 3 I0417 00:38:04.171533 7 log.go:172] (0xc002c6c160) Reply frame received for 3 I0417 00:38:04.171578 7 log.go:172] (0xc002c6c160) (0xc0014fd360) Create stream I0417 00:38:04.171592 7 log.go:172] (0xc002c6c160) (0xc0014fd360) Stream added, broadcasting: 5 I0417 00:38:04.172602 7 log.go:172] (0xc002c6c160) Reply frame received for 5 I0417 00:38:04.233712 7 log.go:172] (0xc002c6c160) Data frame 
received for 3 I0417 00:38:04.233758 7 log.go:172] (0xc001e646e0) (3) Data frame handling I0417 00:38:04.233774 7 log.go:172] (0xc001e646e0) (3) Data frame sent I0417 00:38:04.233787 7 log.go:172] (0xc002c6c160) Data frame received for 3 I0417 00:38:04.233797 7 log.go:172] (0xc001e646e0) (3) Data frame handling I0417 00:38:04.233824 7 log.go:172] (0xc002c6c160) Data frame received for 5 I0417 00:38:04.233845 7 log.go:172] (0xc0014fd360) (5) Data frame handling I0417 00:38:04.235848 7 log.go:172] (0xc002c6c160) Data frame received for 1 I0417 00:38:04.235881 7 log.go:172] (0xc0011a2e60) (1) Data frame handling I0417 00:38:04.235907 7 log.go:172] (0xc0011a2e60) (1) Data frame sent I0417 00:38:04.235933 7 log.go:172] (0xc002c6c160) (0xc0011a2e60) Stream removed, broadcasting: 1 I0417 00:38:04.235996 7 log.go:172] (0xc002c6c160) Go away received I0417 00:38:04.236070 7 log.go:172] (0xc002c6c160) (0xc0011a2e60) Stream removed, broadcasting: 1 I0417 00:38:04.236105 7 log.go:172] (0xc002c6c160) (0xc001e646e0) Stream removed, broadcasting: 3 I0417 00:38:04.236131 7 log.go:172] (0xc002c6c160) (0xc0014fd360) Stream removed, broadcasting: 5 Apr 17 00:38:04.236: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 17 00:38:04.236: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-438 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:38:04.236: INFO: >>> kubeConfig: /root/.kube/config I0417 00:38:04.264372 7 log.go:172] (0xc002d0a580) (0xc001e64be0) Create stream I0417 00:38:04.264395 7 log.go:172] (0xc002d0a580) (0xc001e64be0) Stream added, broadcasting: 1 I0417 00:38:04.266868 7 log.go:172] (0xc002d0a580) Reply frame received for 1 I0417 00:38:04.266913 7 log.go:172] (0xc002d0a580) (0xc00196f540) Create stream I0417 00:38:04.266928 7 log.go:172] (0xc002d0a580) (0xc00196f540) Stream added, 
broadcasting: 3 I0417 00:38:04.267905 7 log.go:172] (0xc002d0a580) Reply frame received for 3 I0417 00:38:04.267927 7 log.go:172] (0xc002d0a580) (0xc00085a460) Create stream I0417 00:38:04.267935 7 log.go:172] (0xc002d0a580) (0xc00085a460) Stream added, broadcasting: 5 I0417 00:38:04.268808 7 log.go:172] (0xc002d0a580) Reply frame received for 5 I0417 00:38:04.323713 7 log.go:172] (0xc002d0a580) Data frame received for 3 I0417 00:38:04.323734 7 log.go:172] (0xc00196f540) (3) Data frame handling I0417 00:38:04.323742 7 log.go:172] (0xc00196f540) (3) Data frame sent I0417 00:38:04.323749 7 log.go:172] (0xc002d0a580) Data frame received for 3 I0417 00:38:04.323755 7 log.go:172] (0xc00196f540) (3) Data frame handling I0417 00:38:04.324028 7 log.go:172] (0xc002d0a580) Data frame received for 5 I0417 00:38:04.324063 7 log.go:172] (0xc00085a460) (5) Data frame handling I0417 00:38:04.326153 7 log.go:172] (0xc002d0a580) Data frame received for 1 I0417 00:38:04.326173 7 log.go:172] (0xc001e64be0) (1) Data frame handling I0417 00:38:04.326197 7 log.go:172] (0xc001e64be0) (1) Data frame sent I0417 00:38:04.326211 7 log.go:172] (0xc002d0a580) (0xc001e64be0) Stream removed, broadcasting: 1 I0417 00:38:04.326229 7 log.go:172] (0xc002d0a580) Go away received I0417 00:38:04.326466 7 log.go:172] (0xc002d0a580) (0xc001e64be0) Stream removed, broadcasting: 1 I0417 00:38:04.326503 7 log.go:172] (0xc002d0a580) (0xc00196f540) Stream removed, broadcasting: 3 I0417 00:38:04.326540 7 log.go:172] (0xc002d0a580) (0xc00085a460) Stream removed, broadcasting: 5 Apr 17 00:38:04.326: INFO: Exec stderr: "" Apr 17 00:38:04.326: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-438 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:38:04.326: INFO: >>> kubeConfig: /root/.kube/config I0417 00:38:04.357661 7 log.go:172] (0xc002c6c840) (0xc0011a35e0) Create stream I0417 
00:38:04.357697 7 log.go:172] (0xc002c6c840) (0xc0011a35e0) Stream added, broadcasting: 1 I0417 00:38:04.359901 7 log.go:172] (0xc002c6c840) Reply frame received for 1 I0417 00:38:04.359958 7 log.go:172] (0xc002c6c840) (0xc0011a3e00) Create stream I0417 00:38:04.359975 7 log.go:172] (0xc002c6c840) (0xc0011a3e00) Stream added, broadcasting: 3 I0417 00:38:04.360832 7 log.go:172] (0xc002c6c840) Reply frame received for 3 I0417 00:38:04.360868 7 log.go:172] (0xc002c6c840) (0xc001e64c80) Create stream I0417 00:38:04.360885 7 log.go:172] (0xc002c6c840) (0xc001e64c80) Stream added, broadcasting: 5 I0417 00:38:04.361973 7 log.go:172] (0xc002c6c840) Reply frame received for 5 I0417 00:38:04.436924 7 log.go:172] (0xc002c6c840) Data frame received for 3 I0417 00:38:04.436957 7 log.go:172] (0xc0011a3e00) (3) Data frame handling I0417 00:38:04.436970 7 log.go:172] (0xc0011a3e00) (3) Data frame sent I0417 00:38:04.436979 7 log.go:172] (0xc002c6c840) Data frame received for 3 I0417 00:38:04.436988 7 log.go:172] (0xc0011a3e00) (3) Data frame handling I0417 00:38:04.437040 7 log.go:172] (0xc002c6c840) Data frame received for 5 I0417 00:38:04.437074 7 log.go:172] (0xc001e64c80) (5) Data frame handling I0417 00:38:04.438330 7 log.go:172] (0xc002c6c840) Data frame received for 1 I0417 00:38:04.438370 7 log.go:172] (0xc0011a35e0) (1) Data frame handling I0417 00:38:04.438397 7 log.go:172] (0xc0011a35e0) (1) Data frame sent I0417 00:38:04.438418 7 log.go:172] (0xc002c6c840) (0xc0011a35e0) Stream removed, broadcasting: 1 I0417 00:38:04.438442 7 log.go:172] (0xc002c6c840) Go away received I0417 00:38:04.438600 7 log.go:172] (0xc002c6c840) (0xc0011a35e0) Stream removed, broadcasting: 1 I0417 00:38:04.438632 7 log.go:172] (0xc002c6c840) (0xc0011a3e00) Stream removed, broadcasting: 3 I0417 00:38:04.438661 7 log.go:172] (0xc002c6c840) (0xc001e64c80) Stream removed, broadcasting: 5 Apr 17 00:38:04.438: INFO: Exec stderr: "" Apr 17 00:38:04.438: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-438 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:38:04.438: INFO: >>> kubeConfig: /root/.kube/config I0417 00:38:04.479011 7 log.go:172] (0xc003080370) (0xc00085b180) Create stream I0417 00:38:04.479039 7 log.go:172] (0xc003080370) (0xc00085b180) Stream added, broadcasting: 1 I0417 00:38:04.481466 7 log.go:172] (0xc003080370) Reply frame received for 1 I0417 00:38:04.481510 7 log.go:172] (0xc003080370) (0xc000508780) Create stream I0417 00:38:04.481531 7 log.go:172] (0xc003080370) (0xc000508780) Stream added, broadcasting: 3 I0417 00:38:04.482578 7 log.go:172] (0xc003080370) Reply frame received for 3 I0417 00:38:04.482629 7 log.go:172] (0xc003080370) (0xc00085b220) Create stream I0417 00:38:04.482645 7 log.go:172] (0xc003080370) (0xc00085b220) Stream added, broadcasting: 5 I0417 00:38:04.483539 7 log.go:172] (0xc003080370) Reply frame received for 5 I0417 00:38:04.539350 7 log.go:172] (0xc003080370) Data frame received for 5 I0417 00:38:04.539379 7 log.go:172] (0xc00085b220) (5) Data frame handling I0417 00:38:04.539413 7 log.go:172] (0xc003080370) Data frame received for 3 I0417 00:38:04.539485 7 log.go:172] (0xc000508780) (3) Data frame handling I0417 00:38:04.539534 7 log.go:172] (0xc000508780) (3) Data frame sent I0417 00:38:04.539552 7 log.go:172] (0xc003080370) Data frame received for 3 I0417 00:38:04.539562 7 log.go:172] (0xc000508780) (3) Data frame handling I0417 00:38:04.540973 7 log.go:172] (0xc003080370) Data frame received for 1 I0417 00:38:04.540988 7 log.go:172] (0xc00085b180) (1) Data frame handling I0417 00:38:04.541002 7 log.go:172] (0xc00085b180) (1) Data frame sent I0417 00:38:04.541307 7 log.go:172] (0xc003080370) (0xc00085b180) Stream removed, broadcasting: 1 I0417 00:38:04.541373 7 log.go:172] (0xc003080370) Go away received I0417 00:38:04.541450 7 log.go:172] (0xc003080370) (0xc00085b180) Stream removed, broadcasting: 1 
I0417 00:38:04.541466 7 log.go:172] (0xc003080370) (0xc000508780) Stream removed, broadcasting: 3 I0417 00:38:04.541478 7 log.go:172] (0xc003080370) (0xc00085b220) Stream removed, broadcasting: 5 Apr 17 00:38:04.541: INFO: Exec stderr: "" Apr 17 00:38:04.541: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-438 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:38:04.541: INFO: >>> kubeConfig: /root/.kube/config I0417 00:38:04.575891 7 log.go:172] (0xc003080630) (0xc00085b360) Create stream I0417 00:38:04.575918 7 log.go:172] (0xc003080630) (0xc00085b360) Stream added, broadcasting: 1 I0417 00:38:04.578304 7 log.go:172] (0xc003080630) Reply frame received for 1 I0417 00:38:04.578333 7 log.go:172] (0xc003080630) (0xc00085b4a0) Create stream I0417 00:38:04.578342 7 log.go:172] (0xc003080630) (0xc00085b4a0) Stream added, broadcasting: 3 I0417 00:38:04.579442 7 log.go:172] (0xc003080630) Reply frame received for 3 I0417 00:38:04.579500 7 log.go:172] (0xc003080630) (0xc00085b720) Create stream I0417 00:38:04.579519 7 log.go:172] (0xc003080630) (0xc00085b720) Stream added, broadcasting: 5 I0417 00:38:04.580527 7 log.go:172] (0xc003080630) Reply frame received for 5 I0417 00:38:04.644080 7 log.go:172] (0xc003080630) Data frame received for 3 I0417 00:38:04.644111 7 log.go:172] (0xc00085b4a0) (3) Data frame handling I0417 00:38:04.644120 7 log.go:172] (0xc00085b4a0) (3) Data frame sent I0417 00:38:04.644125 7 log.go:172] (0xc003080630) Data frame received for 3 I0417 00:38:04.644129 7 log.go:172] (0xc00085b4a0) (3) Data frame handling I0417 00:38:04.644206 7 log.go:172] (0xc003080630) Data frame received for 5 I0417 00:38:04.644222 7 log.go:172] (0xc00085b720) (5) Data frame handling I0417 00:38:04.646121 7 log.go:172] (0xc003080630) Data frame received for 1 I0417 00:38:04.646155 7 log.go:172] (0xc00085b360) (1) Data frame handling I0417 
00:38:04.646196 7 log.go:172] (0xc00085b360) (1) Data frame sent I0417 00:38:04.646225 7 log.go:172] (0xc003080630) (0xc00085b360) Stream removed, broadcasting: 1 I0417 00:38:04.646331 7 log.go:172] (0xc003080630) Go away received I0417 00:38:04.646372 7 log.go:172] (0xc003080630) (0xc00085b360) Stream removed, broadcasting: 1 I0417 00:38:04.646395 7 log.go:172] (0xc003080630) (0xc00085b4a0) Stream removed, broadcasting: 3 I0417 00:38:04.646402 7 log.go:172] (0xc003080630) (0xc00085b720) Stream removed, broadcasting: 5 Apr 17 00:38:04.646: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:38:04.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-438" for this suite. • [SLOW TEST:11.210 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3826,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:38:04.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:38:04.714: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 17 00:38:09.720: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 17 00:38:09.721: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 17 00:38:11.724: INFO: Creating deployment "test-rollover-deployment" Apr 17 00:38:11.749: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 17 00:38:13.756: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 17 00:38:13.761: INFO: Ensure that both replica sets have 1 created replica Apr 17 00:38:13.767: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 17 00:38:13.773: INFO: Updating deployment test-rollover-deployment Apr 17 00:38:13.773: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 17 00:38:15.802: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 17 00:38:15.811: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 17 00:38:15.816: INFO: all replica sets need to contain the pod-template-hash label Apr 17 00:38:15.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680693, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 00:38:17.822: INFO: all replica sets need to contain the pod-template-hash label Apr 17 00:38:17.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680697, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 00:38:19.821: INFO: all replica sets need to contain the pod-template-hash label Apr 17 00:38:19.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680697, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 00:38:21.824: INFO: all replica sets need to contain the pod-template-hash label Apr 17 00:38:21.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680697, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 00:38:23.823: INFO: all replica sets need to contain the pod-template-hash label Apr 17 00:38:23.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680697, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 00:38:25.823: INFO: all replica sets need to contain the pod-template-hash label Apr 17 00:38:25.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680697, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680691, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 00:38:27.825: INFO: Apr 17 00:38:27.825: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 17 00:38:27.832: INFO: 
Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1869 /apis/apps/v1/namespaces/deployment-1869/deployments/test-rollover-deployment a5ab9eaa-8444-4001-b5cf-fd789d5294c5 8677226 2 2020-04-17 00:38:11 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005fbbdd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-17 00:38:11 +0000 UTC,LastTransitionTime:2020-04-17 00:38:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully 
progressed.,LastUpdateTime:2020-04-17 00:38:27 +0000 UTC,LastTransitionTime:2020-04-17 00:38:11 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 17 00:38:27.835: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-1869 /apis/apps/v1/namespaces/deployment-1869/replicasets/test-rollover-deployment-78df7bc796 fb9cb0d8-ed54-41d6-99ce-76bd2ce719fd 8677215 2 2020-04-17 00:38:13 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment a5ab9eaa-8444-4001-b5cf-fd789d5294c5 0xc0022782c7 0xc0022782c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002278338 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 17 00:38:27.835: INFO: All old 
ReplicaSets of Deployment "test-rollover-deployment": Apr 17 00:38:27.835: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1869 /apis/apps/v1/namespaces/deployment-1869/replicasets/test-rollover-controller 3b2461cd-2df4-4751-a945-589654e2760b 8677225 2 2020-04-17 00:38:04 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment a5ab9eaa-8444-4001-b5cf-fd789d5294c5 0xc0022781df 0xc0022781f0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002278258 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 17 00:38:27.835: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-1869 /apis/apps/v1/namespaces/deployment-1869/replicasets/test-rollover-deployment-f6c94f66c 25b2803c-63e9-4f05-8f34-dc14943a19ae 8677167 2 2020-04-17 00:38:11 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment a5ab9eaa-8444-4001-b5cf-fd789d5294c5 0xc0022783a0 0xc0022783a1}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002278418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 17 00:38:27.837: INFO: Pod "test-rollover-deployment-78df7bc796-h99wk" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-h99wk test-rollover-deployment-78df7bc796- deployment-1869 /api/v1/namespaces/deployment-1869/pods/test-rollover-deployment-78df7bc796-h99wk 5e56e764-79d2-427c-b6f2-80b944d8299f 8677182 0 2020-04-17 00:38:13 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 fb9cb0d8-ed54-41d6-99ce-76bd2ce719fd 0xc0022789c7 0xc0022789c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kgwtx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kgwtx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kgwtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullS
ecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:38:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:38:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:38:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:38:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.63,StartTime:2020-04-17 00:38:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 00:38:16 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://515a32595430668d1bdaadf1e83ae3b4797f64c430446858e94618becdd73d01,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:38:27.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1869" for this suite. • [SLOW TEST:23.189 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":216,"skipped":3838,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:38:27.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to 
override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Apr 17 00:38:27.986: INFO: Waiting up to 5m0s for pod "client-containers-dafaf5cf-7468-4be2-bca5-2a841cea28e7" in namespace "containers-5338" to be "Succeeded or Failed" Apr 17 00:38:28.030: INFO: Pod "client-containers-dafaf5cf-7468-4be2-bca5-2a841cea28e7": Phase="Pending", Reason="", readiness=false. Elapsed: 43.090047ms Apr 17 00:38:30.033: INFO: Pod "client-containers-dafaf5cf-7468-4be2-bca5-2a841cea28e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046947054s Apr 17 00:38:32.038: INFO: Pod "client-containers-dafaf5cf-7468-4be2-bca5-2a841cea28e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051290164s STEP: Saw pod success Apr 17 00:38:32.038: INFO: Pod "client-containers-dafaf5cf-7468-4be2-bca5-2a841cea28e7" satisfied condition "Succeeded or Failed" Apr 17 00:38:32.041: INFO: Trying to get logs from node latest-worker2 pod client-containers-dafaf5cf-7468-4be2-bca5-2a841cea28e7 container test-container: STEP: delete the pod Apr 17 00:38:32.064: INFO: Waiting for pod client-containers-dafaf5cf-7468-4be2-bca5-2a841cea28e7 to disappear Apr 17 00:38:32.080: INFO: Pod client-containers-dafaf5cf-7468-4be2-bca5-2a841cea28e7 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:38:32.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5338" for this suite. 
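The "override the image's default arguments (docker cmd)" test above relies on the documented precedence between a container image's ENTRYPOINT/CMD and the pod spec's `command`/`args` fields: setting only `args` replaces the image CMD while keeping the ENTRYPOINT. A minimal sketch of that precedence rule, using a hypothetical local `Container` struct (not the real `corev1.Container` type) and an illustrative `/agnhost` entrypoint:

```go
package main

import "fmt"

// Container mirrors only the two relevant fields of corev1.Container.
// This is a hypothetical local stand-in for illustration, not the real API type.
type Container struct {
	Command []string // when set, overrides the image ENTRYPOINT
	Args    []string // when set, overrides the image CMD
}

// resolveInvocation sketches the documented precedence: Command replaces
// ENTRYPOINT (and discards the image CMD); Args replaces CMD.
func resolveInvocation(entrypoint, cmd []string, c Container) []string {
	runCmd := entrypoint
	runArgs := cmd
	if len(c.Command) > 0 {
		runCmd = c.Command
		runArgs = nil // image CMD is ignored once Command is set
	}
	if len(c.Args) > 0 {
		runArgs = c.Args
	}
	return append(append([]string{}, runCmd...), runArgs...)
}

func main() {
	// Assumed image defaults, e.g. ENTRYPOINT ["/agnhost"] CMD ["pause"].
	entry, cmd := []string{"/agnhost"}, []string{"pause"}

	// Setting only Args overrides CMD but keeps the ENTRYPOINT, which is
	// the behavior this conformance test exercises.
	c := Container{Args: []string{"override", "arguments"}}
	fmt.Println(resolveInvocation(entry, cmd, c))
}
```

Running it prints `[/agnhost override arguments]`: the image's `pause` CMD is dropped, the ENTRYPOINT survives.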
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3843,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:38:32.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 17 00:38:32.227: INFO: Waiting up to 5m0s for pod "downward-api-55e93044-7d5c-4b62-a78c-a2a7b79a2209" in namespace "downward-api-1533" to be "Succeeded or Failed" Apr 17 00:38:32.239: INFO: Pod "downward-api-55e93044-7d5c-4b62-a78c-a2a7b79a2209": Phase="Pending", Reason="", readiness=false. Elapsed: 12.655284ms Apr 17 00:38:34.282: INFO: Pod "downward-api-55e93044-7d5c-4b62-a78c-a2a7b79a2209": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055110428s Apr 17 00:38:36.284: INFO: Pod "downward-api-55e93044-7d5c-4b62-a78c-a2a7b79a2209": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.057737908s STEP: Saw pod success Apr 17 00:38:36.284: INFO: Pod "downward-api-55e93044-7d5c-4b62-a78c-a2a7b79a2209" satisfied condition "Succeeded or Failed" Apr 17 00:38:36.287: INFO: Trying to get logs from node latest-worker2 pod downward-api-55e93044-7d5c-4b62-a78c-a2a7b79a2209 container dapi-container: STEP: delete the pod Apr 17 00:38:36.433: INFO: Waiting for pod downward-api-55e93044-7d5c-4b62-a78c-a2a7b79a2209 to disappear Apr 17 00:38:36.461: INFO: Pod downward-api-55e93044-7d5c-4b62-a78c-a2a7b79a2209 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:38:36.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1533" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3846,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:38:36.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 17 00:38:36.608: INFO: Waiting up to 5m0s for pod "pod-08b5b46c-eb4c-48e2-8286-5cd3de7e48b7" in namespace "emptydir-7586" to be "Succeeded or 
Failed" Apr 17 00:38:36.636: INFO: Pod "pod-08b5b46c-eb4c-48e2-8286-5cd3de7e48b7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.458524ms Apr 17 00:38:38.640: INFO: Pod "pod-08b5b46c-eb4c-48e2-8286-5cd3de7e48b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032453239s Apr 17 00:38:40.643: INFO: Pod "pod-08b5b46c-eb4c-48e2-8286-5cd3de7e48b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035876354s STEP: Saw pod success Apr 17 00:38:40.644: INFO: Pod "pod-08b5b46c-eb4c-48e2-8286-5cd3de7e48b7" satisfied condition "Succeeded or Failed" Apr 17 00:38:40.646: INFO: Trying to get logs from node latest-worker2 pod pod-08b5b46c-eb4c-48e2-8286-5cd3de7e48b7 container test-container: STEP: delete the pod Apr 17 00:38:40.679: INFO: Waiting for pod pod-08b5b46c-eb4c-48e2-8286-5cd3de7e48b7 to disappear Apr 17 00:38:40.725: INFO: Pod pod-08b5b46c-eb4c-48e2-8286-5cd3de7e48b7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:38:40.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7586" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3859,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:38:40.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-8ff6c175-4d34-492b-b2a9-e37ed4a45e4d STEP: Creating a pod to test consume configMaps Apr 17 00:38:41.078: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-048db2a6-6b95-4644-9bbb-19db4eeb12e3" in namespace "projected-8143" to be "Succeeded or Failed" Apr 17 00:38:41.111: INFO: Pod "pod-projected-configmaps-048db2a6-6b95-4644-9bbb-19db4eeb12e3": Phase="Pending", Reason="", readiness=false. Elapsed: 32.493242ms Apr 17 00:38:43.117: INFO: Pod "pod-projected-configmaps-048db2a6-6b95-4644-9bbb-19db4eeb12e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039084768s Apr 17 00:38:45.122: INFO: Pod "pod-projected-configmaps-048db2a6-6b95-4644-9bbb-19db4eeb12e3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043458626s STEP: Saw pod success Apr 17 00:38:45.122: INFO: Pod "pod-projected-configmaps-048db2a6-6b95-4644-9bbb-19db4eeb12e3" satisfied condition "Succeeded or Failed" Apr 17 00:38:45.125: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-048db2a6-6b95-4644-9bbb-19db4eeb12e3 container projected-configmap-volume-test: STEP: delete the pod Apr 17 00:38:45.143: INFO: Waiting for pod pod-projected-configmaps-048db2a6-6b95-4644-9bbb-19db4eeb12e3 to disappear Apr 17 00:38:45.161: INFO: Pod pod-projected-configmaps-048db2a6-6b95-4644-9bbb-19db4eeb12e3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:38:45.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8143" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3918,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:38:45.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-l98v STEP: Creating a pod to test atomic-volume-subpath Apr 17 00:38:45.251: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-l98v" in namespace "subpath-3161" to be "Succeeded or Failed" Apr 17 00:38:45.255: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Pending", Reason="", readiness=false. Elapsed: 3.809548ms Apr 17 00:38:47.259: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007974252s Apr 17 00:38:49.263: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Running", Reason="", readiness=true. Elapsed: 4.011951471s Apr 17 00:38:51.267: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Running", Reason="", readiness=true. Elapsed: 6.016361681s Apr 17 00:38:53.272: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Running", Reason="", readiness=true. Elapsed: 8.020776697s Apr 17 00:38:55.276: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Running", Reason="", readiness=true. Elapsed: 10.02531566s Apr 17 00:38:57.280: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Running", Reason="", readiness=true. Elapsed: 12.029586694s Apr 17 00:38:59.285: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Running", Reason="", readiness=true. Elapsed: 14.034136766s Apr 17 00:39:01.289: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Running", Reason="", readiness=true. Elapsed: 16.038686157s Apr 17 00:39:03.294: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Running", Reason="", readiness=true. Elapsed: 18.04323187s Apr 17 00:39:05.299: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Running", Reason="", readiness=true. Elapsed: 20.04791579s Apr 17 00:39:07.303: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.052053831s Apr 17 00:39:09.307: INFO: Pod "pod-subpath-test-downwardapi-l98v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.056290112s STEP: Saw pod success Apr 17 00:39:09.307: INFO: Pod "pod-subpath-test-downwardapi-l98v" satisfied condition "Succeeded or Failed" Apr 17 00:39:09.310: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-l98v container test-container-subpath-downwardapi-l98v: STEP: delete the pod Apr 17 00:39:09.326: INFO: Waiting for pod pod-subpath-test-downwardapi-l98v to disappear Apr 17 00:39:09.330: INFO: Pod pod-subpath-test-downwardapi-l98v no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-l98v Apr 17 00:39:09.330: INFO: Deleting pod "pod-subpath-test-downwardapi-l98v" in namespace "subpath-3161" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:39:09.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3161" for this suite. 
• [SLOW TEST:24.171 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":221,"skipped":3920,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:39:09.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 00:39:10.133: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 17 00:39:12.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680750, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680750, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680750, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722680750, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 00:39:15.204: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:39:15.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4019" for this suite. STEP: Destroying namespace "webhook-4019-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.097 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":222,"skipped":3992,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:39:15.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook 
request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 17 00:39:23.546: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 17 00:39:23.553: INFO: Pod pod-with-poststart-http-hook still exists Apr 17 00:39:25.553: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 17 00:39:25.558: INFO: Pod pod-with-poststart-http-hook still exists Apr 17 00:39:27.553: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 17 00:39:27.558: INFO: Pod pod-with-poststart-http-hook still exists Apr 17 00:39:29.553: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 17 00:39:29.557: INFO: Pod pod-with-poststart-http-hook still exists Apr 17 00:39:31.553: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 17 00:39:31.558: INFO: Pod pod-with-poststart-http-hook still exists Apr 17 00:39:33.553: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 17 00:39:33.557: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:39:33.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8253" for this suite. 
• [SLOW TEST:18.129 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3998,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:39:33.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 17 00:39:33.652: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:39:42.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9014" for this suite. • [SLOW TEST:9.208 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":4021,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:39:42.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-43d9247e-58be-46dc-a84f-4fe1e12263f2 STEP: Creating configMap with name cm-test-opt-upd-3d62fdf3-294b-453a-ad54-81b994a72a94 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-43d9247e-58be-46dc-a84f-4fe1e12263f2 STEP: Updating configmap cm-test-opt-upd-3d62fdf3-294b-453a-ad54-81b994a72a94 STEP: Creating configMap with name cm-test-opt-create-ec8f5b21-7f55-4624-a77a-52943f89c6a8 STEP: waiting to 
observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:41:03.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7399" for this suite. • [SLOW TEST:80.579 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":4036,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:41:03.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:41:03.404: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:41:09.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6615" for this suite. • [SLOW TEST:6.595 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":226,"skipped":4041,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:41:09.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:41:26.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6063" for this suite. • [SLOW TEST:16.265 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":227,"skipped":4044,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:41:26.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Apr 17 00:41:26.276: INFO: Waiting up to 5m0s for pod "client-containers-89940263-5e00-4cc0-b620-00b1eb16521c" in namespace "containers-2813" to be "Succeeded or Failed" Apr 17 00:41:26.321: INFO: Pod "client-containers-89940263-5e00-4cc0-b620-00b1eb16521c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.619554ms Apr 17 00:41:28.325: INFO: Pod "client-containers-89940263-5e00-4cc0-b620-00b1eb16521c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048914784s Apr 17 00:41:30.329: INFO: Pod "client-containers-89940263-5e00-4cc0-b620-00b1eb16521c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.053035286s STEP: Saw pod success Apr 17 00:41:30.329: INFO: Pod "client-containers-89940263-5e00-4cc0-b620-00b1eb16521c" satisfied condition "Succeeded or Failed" Apr 17 00:41:30.332: INFO: Trying to get logs from node latest-worker pod client-containers-89940263-5e00-4cc0-b620-00b1eb16521c container test-container: STEP: delete the pod Apr 17 00:41:30.373: INFO: Waiting for pod client-containers-89940263-5e00-4cc0-b620-00b1eb16521c to disappear Apr 17 00:41:30.377: INFO: Pod client-containers-89940263-5e00-4cc0-b620-00b1eb16521c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:41:30.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2813" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":4046,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:41:30.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 00:41:30.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68482978-bde4-4bd2-9328-4485717d38a4" in namespace "projected-6464" to be "Succeeded or Failed" Apr 17 00:41:30.473: INFO: Pod "downwardapi-volume-68482978-bde4-4bd2-9328-4485717d38a4": Phase="Pending", Reason="", readiness=false. Elapsed: 27.794863ms Apr 17 00:41:32.484: INFO: Pod "downwardapi-volume-68482978-bde4-4bd2-9328-4485717d38a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038375356s Apr 17 00:41:34.488: INFO: Pod "downwardapi-volume-68482978-bde4-4bd2-9328-4485717d38a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042706377s STEP: Saw pod success Apr 17 00:41:34.488: INFO: Pod "downwardapi-volume-68482978-bde4-4bd2-9328-4485717d38a4" satisfied condition "Succeeded or Failed" Apr 17 00:41:34.491: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-68482978-bde4-4bd2-9328-4485717d38a4 container client-container: STEP: delete the pod Apr 17 00:41:34.522: INFO: Waiting for pod downwardapi-volume-68482978-bde4-4bd2-9328-4485717d38a4 to disappear Apr 17 00:41:34.532: INFO: Pod downwardapi-volume-68482978-bde4-4bd2-9328-4485717d38a4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:41:34.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6464" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":4066,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:41:34.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:41:34.600: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 17 00:41:36.660: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:41:36.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8894" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":230,"skipped":4099,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:41:36.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 17 00:41:36.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-648' Apr 17 00:41:37.387: INFO: stderr: "" Apr 17 00:41:37.387: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 17 00:41:37.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-648'
Apr 17 00:41:37.530: INFO: stderr: ""
Apr 17 00:41:37.530: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Apr 17 00:41:42.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-648'
Apr 17 00:41:42.644: INFO: stderr: ""
Apr 17 00:41:42.644: INFO: stdout: "update-demo-nautilus-4qg76 update-demo-nautilus-w8w82 "
Apr 17 00:41:42.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4qg76 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-648'
Apr 17 00:41:42.745: INFO: stderr: ""
Apr 17 00:41:42.745: INFO: stdout: "true"
Apr 17 00:41:42.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4qg76 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-648'
Apr 17 00:41:42.842: INFO: stderr: ""
Apr 17 00:41:42.842: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 17 00:41:42.842: INFO: validating pod update-demo-nautilus-4qg76
Apr 17 00:41:42.845: INFO: got data: { "image": "nautilus.jpg" }
Apr 17 00:41:42.845: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 17 00:41:42.845: INFO: update-demo-nautilus-4qg76 is verified up and running
Apr 17 00:41:42.845: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8w82 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-648'
Apr 17 00:41:42.951: INFO: stderr: ""
Apr 17 00:41:42.951: INFO: stdout: "true"
Apr 17 00:41:42.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8w82 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-648'
Apr 17 00:41:43.100: INFO: stderr: ""
Apr 17 00:41:43.100: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 17 00:41:43.100: INFO: validating pod update-demo-nautilus-w8w82
Apr 17 00:41:43.106: INFO: got data: { "image": "nautilus.jpg" }
Apr 17 00:41:43.106: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 17 00:41:43.106: INFO: update-demo-nautilus-w8w82 is verified up and running
STEP: using delete to clean up resources
Apr 17 00:41:43.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-648'
Apr 17 00:41:43.216: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 17 00:41:43.216: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 17 00:41:43.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-648'
Apr 17 00:41:43.316: INFO: stderr: "No resources found in kubectl-648 namespace.\n"
Apr 17 00:41:43.316: INFO: stdout: ""
Apr 17 00:41:43.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-648 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 17 00:41:43.405: INFO: stderr: ""
Apr 17 00:41:43.405: INFO: stdout: "update-demo-nautilus-4qg76\nupdate-demo-nautilus-w8w82\n"
Apr 17 00:41:43.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-648'
Apr 17 00:41:44.006: INFO: stderr: "No resources found in kubectl-648 namespace.\n"
Apr 17 00:41:44.006: INFO: stdout: ""
Apr 17 00:41:44.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-648 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 17 00:41:44.098: INFO: stderr: ""
Apr 17 00:41:44.098: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:41:44.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-648" for this suite.
• [SLOW TEST:7.334 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":231,"skipped":4114,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:41:44.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:42:01.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1933" for this suite.
• [SLOW TEST:17.164 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":232,"skipped":4116,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:42:01.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Apr 17 00:42:01.368: INFO: Waiting up to 5m0s for pod "client-containers-8ae01bdc-f935-4d5b-a288-51bdf19aadf6" in namespace "containers-5652" to be "Succeeded or Failed"
Apr 17 00:42:01.383: INFO: Pod "client-containers-8ae01bdc-f935-4d5b-a288-51bdf19aadf6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.216317ms
Apr 17 00:42:03.387: INFO: Pod "client-containers-8ae01bdc-f935-4d5b-a288-51bdf19aadf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019361022s
Apr 17 00:42:05.391: INFO: Pod "client-containers-8ae01bdc-f935-4d5b-a288-51bdf19aadf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023245415s
STEP: Saw pod success
Apr 17 00:42:05.391: INFO: Pod "client-containers-8ae01bdc-f935-4d5b-a288-51bdf19aadf6" satisfied condition "Succeeded or Failed"
Apr 17 00:42:05.394: INFO: Trying to get logs from node latest-worker2 pod client-containers-8ae01bdc-f935-4d5b-a288-51bdf19aadf6 container test-container:
STEP: delete the pod
Apr 17 00:42:05.458: INFO: Waiting for pod client-containers-8ae01bdc-f935-4d5b-a288-51bdf19aadf6 to disappear
Apr 17 00:42:05.461: INFO: Pod client-containers-8ae01bdc-f935-4d5b-a288-51bdf19aadf6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:42:05.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5652" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":4161,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:42:05.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-f48b754f-45d3-4ecb-8115-7d1996cc137f
STEP: Creating secret with name s-test-opt-upd-9128c255-6d08-4b88-9e82-2726e8b1fc88
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f48b754f-45d3-4ecb-8115-7d1996cc137f
STEP: Updating secret s-test-opt-upd-9128c255-6d08-4b88-9e82-2726e8b1fc88
STEP: Creating secret with name s-test-opt-create-2f687de7-3dce-423c-bd28-11dd920760bf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:42:11.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5981" for this suite.
• [SLOW TEST:6.226 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":4185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:42:11.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 17 00:42:11.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bce7f57-1757-49bb-8376-164a32390e82" in namespace "downward-api-6193" to be "Succeeded or Failed"
Apr 17 00:42:11.767: INFO: Pod "downwardapi-volume-9bce7f57-1757-49bb-8376-164a32390e82": Phase="Pending", Reason="", readiness=false. Elapsed: 16.73904ms
Apr 17 00:42:13.842: INFO: Pod "downwardapi-volume-9bce7f57-1757-49bb-8376-164a32390e82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091170673s
Apr 17 00:42:15.846: INFO: Pod "downwardapi-volume-9bce7f57-1757-49bb-8376-164a32390e82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095002055s
STEP: Saw pod success
Apr 17 00:42:15.846: INFO: Pod "downwardapi-volume-9bce7f57-1757-49bb-8376-164a32390e82" satisfied condition "Succeeded or Failed"
Apr 17 00:42:15.849: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9bce7f57-1757-49bb-8376-164a32390e82 container client-container:
STEP: delete the pod
Apr 17 00:42:15.880: INFO: Waiting for pod downwardapi-volume-9bce7f57-1757-49bb-8376-164a32390e82 to disappear
Apr 17 00:42:15.887: INFO: Pod downwardapi-volume-9bce7f57-1757-49bb-8376-164a32390e82 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:42:15.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6193" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":4211,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:42:15.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 17 00:42:15.946: INFO: Creating ReplicaSet my-hostname-basic-92e21db9-9681-4794-b3d0-619d7c69116e
Apr 17 00:42:15.964: INFO: Pod name my-hostname-basic-92e21db9-9681-4794-b3d0-619d7c69116e: Found 0 pods out of 1
Apr 17 00:42:20.967: INFO: Pod name my-hostname-basic-92e21db9-9681-4794-b3d0-619d7c69116e: Found 1 pods out of 1
Apr 17 00:42:20.967: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-92e21db9-9681-4794-b3d0-619d7c69116e" is running
Apr 17 00:42:20.969: INFO: Pod "my-hostname-basic-92e21db9-9681-4794-b3d0-619d7c69116e-lhplg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-17 00:42:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-17 00:42:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-17 00:42:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-17 00:42:15 +0000 UTC Reason: Message:}])
Apr 17 00:42:20.969: INFO: Trying to dial the pod
Apr 17 00:42:25.981: INFO: Controller my-hostname-basic-92e21db9-9681-4794-b3d0-619d7c69116e: Got expected result from replica 1 [my-hostname-basic-92e21db9-9681-4794-b3d0-619d7c69116e-lhplg]: "my-hostname-basic-92e21db9-9681-4794-b3d0-619d7c69116e-lhplg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:42:25.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-406" for this suite.
• [SLOW TEST:10.083 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":236,"skipped":4218,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:42:25.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:42:57.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3943" for this suite.
STEP: Destroying namespace "nsdeletetest-6826" for this suite.
Apr 17 00:42:57.282: INFO: Namespace nsdeletetest-6826 was already deleted
STEP: Destroying namespace "nsdeletetest-5508" for this suite.
• [SLOW TEST:31.296 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":237,"skipped":4226,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:42:57.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 17 00:42:57.373: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:42:57.390: INFO: Number of nodes with available pods: 0
Apr 17 00:42:57.390: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:42:58.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:42:58.398: INFO: Number of nodes with available pods: 0
Apr 17 00:42:58.398: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:42:59.395: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:42:59.397: INFO: Number of nodes with available pods: 0
Apr 17 00:42:59.397: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:00.395: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:00.398: INFO: Number of nodes with available pods: 0
Apr 17 00:43:00.398: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:01.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:01.398: INFO: Number of nodes with available pods: 2
Apr 17 00:43:01.398: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 17 00:43:01.415: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:01.417: INFO: Number of nodes with available pods: 1
Apr 17 00:43:01.417: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:02.421: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:02.425: INFO: Number of nodes with available pods: 1
Apr 17 00:43:02.426: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:03.423: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:03.426: INFO: Number of nodes with available pods: 1
Apr 17 00:43:03.426: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:04.423: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:04.426: INFO: Number of nodes with available pods: 1
Apr 17 00:43:04.426: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:05.422: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:05.426: INFO: Number of nodes with available pods: 1
Apr 17 00:43:05.426: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:06.422: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:06.425: INFO: Number of nodes with available pods: 1
Apr 17 00:43:06.425: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:07.422: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:07.425: INFO: Number of nodes with available pods: 1
Apr 17 00:43:07.425: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:08.422: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:08.425: INFO: Number of nodes with available pods: 1
Apr 17 00:43:08.425: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:09.428: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:09.433: INFO: Number of nodes with available pods: 1
Apr 17 00:43:09.433: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:10.423: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:10.427: INFO: Number of nodes with available pods: 1
Apr 17 00:43:10.427: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:11.423: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:11.428: INFO: Number of nodes with available pods: 1
Apr 17 00:43:11.428: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:12.423: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:12.426: INFO: Number of nodes with available pods: 1
Apr 17 00:43:12.426: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:13.422: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:13.426: INFO: Number of nodes with available pods: 1
Apr 17 00:43:13.426: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:14.422: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:14.425: INFO: Number of nodes with available pods: 1
Apr 17 00:43:14.426: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:15.423: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:15.426: INFO: Number of nodes with available pods: 1
Apr 17 00:43:15.426: INFO: Node latest-worker is running more than one daemon pod
Apr 17 00:43:16.423: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 17 00:43:16.426: INFO: Number of nodes with available pods: 2
Apr 17 00:43:16.426: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7858, will wait for the garbage collector to delete the pods
Apr 17 00:43:16.488: INFO: Deleting DaemonSet.extensions daemon-set took: 6.245855ms
Apr 17 00:43:16.788: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.197258ms
Apr 17 00:43:22.792: INFO: Number of nodes with available pods: 0
Apr 17 00:43:22.792: INFO: Number of running nodes: 0, number of available pods: 0
Apr 17 00:43:22.795: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7858/daemonsets","resourceVersion":"8678962"},"items":null}
Apr 17 00:43:22.797: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7858/pods","resourceVersion":"8678962"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:43:22.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7858" for this suite.
• [SLOW TEST:25.547 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":238,"skipped":4228,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:43:22.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 17 00:43:30.979: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 17 00:43:30.984: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 17 00:43:32.984: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 17 00:43:32.989: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 17 00:43:34.984: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 17 00:43:34.989: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:43:34.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5889" for this suite.
• [SLOW TEST:12.171 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:43:35.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 17 00:43:35.070: INFO: Waiting up to 5m0s for pod "pod-5546a918-535f-444a-9336-405c4380b5f4" in namespace "emptydir-9951" to be "Succeeded or Failed" Apr 17 00:43:35.074: INFO: Pod "pod-5546a918-535f-444a-9336-405c4380b5f4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.639535ms Apr 17 00:43:37.078: INFO: Pod "pod-5546a918-535f-444a-9336-405c4380b5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007399104s Apr 17 00:43:39.082: INFO: Pod "pod-5546a918-535f-444a-9336-405c4380b5f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01180569s STEP: Saw pod success Apr 17 00:43:39.082: INFO: Pod "pod-5546a918-535f-444a-9336-405c4380b5f4" satisfied condition "Succeeded or Failed" Apr 17 00:43:39.085: INFO: Trying to get logs from node latest-worker pod pod-5546a918-535f-444a-9336-405c4380b5f4 container test-container: STEP: delete the pod Apr 17 00:43:39.106: INFO: Waiting for pod pod-5546a918-535f-444a-9336-405c4380b5f4 to disappear Apr 17 00:43:39.116: INFO: Pod pod-5546a918-535f-444a-9336-405c4380b5f4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:43:39.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9951" for this suite. 
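The "(non-root,0777,default)" variant above checks file permissions on an `emptyDir` volume using the default (node-disk) medium while running as a non-root user. A rough sketch of the shape of pod this exercises (names, UID, and the shell command are illustrative assumptions; the real test uses a dedicated test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo            # hypothetical name
spec:
  securityContext:
    runAsUser: 1001                   # non-root, per the (non-root,...) variant
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Create a file with mode 0777 on the volume and report its permissions.
    command: ["sh", "-c", "touch /mnt/test && chmod 0777 /mnt/test && stat -c '%a' /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir: {}                      # no medium set => default (node disk)
```

A pod like this terminates on its own, matching the "Succeeded or Failed" condition the test waits on.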
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4266,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:43:39.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 17 00:43:39.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1144' Apr 17 00:43:39.332: INFO: stderr: "" Apr 17 00:43:39.332: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 17 00:43:44.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1144 -o json' Apr 17 00:43:44.488: INFO: stderr: "" Apr 17 00:43:44.488: INFO: 
stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-17T00:43:39Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1144\",\n \"resourceVersion\": \"8679107\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1144/pods/e2e-test-httpd-pod\",\n \"uid\": \"deaae24a-e473-4a01-8cb8-0abf22f4c6f4\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-kzblk\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-kzblk\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-kzblk\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-17T00:43:39Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-17T00:43:41Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n 
{\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-17T00:43:41Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-17T00:43:39Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://ba0ca44dd235eeaa758871ebee8f365a117a031287506525f869c991d96d6437\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-17T00:43:41Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.46\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.46\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-17T00:43:39Z\"\n }\n}\n" STEP: replace the image in the pod Apr 17 00:43:44.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1144' Apr 17 00:43:44.721: INFO: stderr: "" Apr 17 00:43:44.721: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Apr 17 00:43:44.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1144' Apr 17 00:43:47.726: INFO: stderr: "" Apr 17 00:43:47.726: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 
00:43:47.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1144" for this suite. • [SLOW TEST:8.620 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":241,"skipped":4281,"failed":0} [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:43:47.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-9600 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9600 STEP: Deleting pre-stop pod Apr 17 00:44:01.004: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:44:01.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9600" for this suite. • [SLOW TEST:13.302 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":242,"skipped":4281,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:44:01.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
configmap-test-volume-e2541be5-bdcb-4d4a-84eb-8a379c15282e STEP: Creating a pod to test consume configMaps Apr 17 00:44:01.118: INFO: Waiting up to 5m0s for pod "pod-configmaps-b0f9a89b-571c-4aa1-a8b6-ef097a20b9e6" in namespace "configmap-8685" to be "Succeeded or Failed" Apr 17 00:44:01.123: INFO: Pod "pod-configmaps-b0f9a89b-571c-4aa1-a8b6-ef097a20b9e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122687ms Apr 17 00:44:03.127: INFO: Pod "pod-configmaps-b0f9a89b-571c-4aa1-a8b6-ef097a20b9e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008263011s Apr 17 00:44:05.131: INFO: Pod "pod-configmaps-b0f9a89b-571c-4aa1-a8b6-ef097a20b9e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01227073s STEP: Saw pod success Apr 17 00:44:05.131: INFO: Pod "pod-configmaps-b0f9a89b-571c-4aa1-a8b6-ef097a20b9e6" satisfied condition "Succeeded or Failed" Apr 17 00:44:05.133: INFO: Trying to get logs from node latest-worker pod pod-configmaps-b0f9a89b-571c-4aa1-a8b6-ef097a20b9e6 container configmap-volume-test: STEP: delete the pod Apr 17 00:44:05.240: INFO: Waiting for pod pod-configmaps-b0f9a89b-571c-4aa1-a8b6-ef097a20b9e6 to disappear Apr 17 00:44:05.248: INFO: Pod pod-configmaps-b0f9a89b-571c-4aa1-a8b6-ef097a20b9e6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:44:05.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8685" for this suite. 
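The test above projects a ConfigMap into a volume with an explicit `defaultMode` and verifies the resulting file permissions. A minimal sketch of the objects involved (names, key, and mode value are illustrative assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                   # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo           # hypothetical name
spec:
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    # Print the octal mode of the projected key's file.
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
      defaultMode: 0400               # octal mode applied to each projected file
```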
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4283,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:44:05.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:44:05.303: INFO: Creating deployment "test-recreate-deployment" Apr 17 00:44:05.309: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 17 00:44:05.333: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 17 00:44:07.339: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 17 00:44:07.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681045, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681045, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681045, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681045, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 00:44:09.346: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 17 00:44:09.356: INFO: Updating deployment test-recreate-deployment Apr 17 00:44:09.356: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 17 00:44:09.852: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6713 /apis/apps/v1/namespaces/deployment-6713/deployments/test-recreate-deployment c8b1c8ed-577b-478f-897e-dd299c7a5e57 8679321 2 2020-04-17 00:44:05 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002dd2aa8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-17 00:44:09 +0000 UTC,LastTransitionTime:2020-04-17 00:44:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-17 00:44:09 +0000 UTC,LastTransitionTime:2020-04-17 00:44:05 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 17 00:44:09.856: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-6713 /apis/apps/v1/namespaces/deployment-6713/replicasets/test-recreate-deployment-5f94c574ff 61523d9d-0383-4178-8bbf-76709e0d73b4 8679318 1 2020-04-17 00:44:09 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment c8b1c8ed-577b-478f-897e-dd299c7a5e57 0xc002dd2ea7 0xc002dd2ea8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] 
[] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002dd2f08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 17 00:44:09.856: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 17 00:44:09.856: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-6713 /apis/apps/v1/namespaces/deployment-6713/replicasets/test-recreate-deployment-846c7dd955 aa788634-cfb3-4828-b9c8-5883f5ed26b3 8679309 2 2020-04-17 00:44:05 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment c8b1c8ed-577b-478f-897e-dd299c7a5e57 0xc002dd2f77 0xc002dd2f78}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002dd2fe8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 17 00:44:09.860: INFO: Pod "test-recreate-deployment-5f94c574ff-gsmxb" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-gsmxb test-recreate-deployment-5f94c574ff- deployment-6713 /api/v1/namespaces/deployment-6713/pods/test-recreate-deployment-5f94c574ff-gsmxb c18ed37c-feac-41dd-b900-32f20a14f135 8679323 0 2020-04-17 00:44:09 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 61523d9d-0383-4178-8bbf-76709e0d73b4 0xc002dd34f7 0xc002dd34f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jtxlb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jtxlb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jtxlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:44:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:44:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:44:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 00:44:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-17 00:44:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:44:09.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6713" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":244,"skipped":4302,"failed":0} S ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:44:09.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-c597583c-e380-4944-ab99-634d7a2af0ae STEP: Creating a pod to test consume secrets Apr 17 00:44:09.957: INFO: Waiting up to 5m0s for pod "pod-secrets-b149ec1c-2285-4f9a-9235-f2f5815770ef" in namespace 
"secrets-5685" to be "Succeeded or Failed" Apr 17 00:44:10.009: INFO: Pod "pod-secrets-b149ec1c-2285-4f9a-9235-f2f5815770ef": Phase="Pending", Reason="", readiness=false. Elapsed: 51.837838ms Apr 17 00:44:12.058: INFO: Pod "pod-secrets-b149ec1c-2285-4f9a-9235-f2f5815770ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101089835s Apr 17 00:44:14.062: INFO: Pod "pod-secrets-b149ec1c-2285-4f9a-9235-f2f5815770ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105080707s STEP: Saw pod success Apr 17 00:44:14.063: INFO: Pod "pod-secrets-b149ec1c-2285-4f9a-9235-f2f5815770ef" satisfied condition "Succeeded or Failed" Apr 17 00:44:14.065: INFO: Trying to get logs from node latest-worker pod pod-secrets-b149ec1c-2285-4f9a-9235-f2f5815770ef container secret-env-test: STEP: delete the pod Apr 17 00:44:14.168: INFO: Waiting for pod pod-secrets-b149ec1c-2285-4f9a-9235-f2f5815770ef to disappear Apr 17 00:44:14.219: INFO: Pod pod-secrets-b149ec1c-2285-4f9a-9235-f2f5815770ef no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:44:14.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5685" for this suite. 
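The Secrets test above consumes a secret through container environment variables via `secretKeyRef`. A minimal sketch of that pattern (names, key, and value are illustrative assumptions):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret                   # hypothetical name
type: Opaque
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo              # hypothetical name
spec:
  containers:
  - name: secret-env-test
    image: docker.io/library/busybox:1.29
    # The secret value is visible to the container only as an env var.
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
  restartPolicy: Never
```

As with the volume tests, the pod runs to completion, so the test can wait for "Succeeded or Failed" and then read the container logs to verify the value.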
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4303,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:44:14.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-6655/configmap-test-a413ac7b-6a57-4100-89b4-ddd9249c02ff STEP: Creating a pod to test consume configMaps Apr 17 00:44:14.335: INFO: Waiting up to 5m0s for pod "pod-configmaps-dcbc22b3-53a8-4425-85d2-4e60e3c7f354" in namespace "configmap-6655" to be "Succeeded or Failed" Apr 17 00:44:14.339: INFO: Pod "pod-configmaps-dcbc22b3-53a8-4425-85d2-4e60e3c7f354": Phase="Pending", Reason="", readiness=false. Elapsed: 3.462585ms Apr 17 00:44:16.343: INFO: Pod "pod-configmaps-dcbc22b3-53a8-4425-85d2-4e60e3c7f354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00751976s Apr 17 00:44:18.347: INFO: Pod "pod-configmaps-dcbc22b3-53a8-4425-85d2-4e60e3c7f354": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011507507s STEP: Saw pod success Apr 17 00:44:18.347: INFO: Pod "pod-configmaps-dcbc22b3-53a8-4425-85d2-4e60e3c7f354" satisfied condition "Succeeded or Failed" Apr 17 00:44:18.350: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-dcbc22b3-53a8-4425-85d2-4e60e3c7f354 container env-test: STEP: delete the pod Apr 17 00:44:18.370: INFO: Waiting for pod pod-configmaps-dcbc22b3-53a8-4425-85d2-4e60e3c7f354 to disappear Apr 17 00:44:18.375: INFO: Pod pod-configmaps-dcbc22b3-53a8-4425-85d2-4e60e3c7f354 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:44:18.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6655" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4317,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:44:18.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 
17 00:44:18.473: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-6a0fa54f-da3f-44d2-bb73-cc24954d83e5" in namespace "security-context-test-4001" to be "Succeeded or Failed" Apr 17 00:44:18.521: INFO: Pod "busybox-privileged-false-6a0fa54f-da3f-44d2-bb73-cc24954d83e5": Phase="Pending", Reason="", readiness=false. Elapsed: 47.994862ms Apr 17 00:44:20.525: INFO: Pod "busybox-privileged-false-6a0fa54f-da3f-44d2-bb73-cc24954d83e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05176666s Apr 17 00:44:22.529: INFO: Pod "busybox-privileged-false-6a0fa54f-da3f-44d2-bb73-cc24954d83e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056064384s Apr 17 00:44:22.529: INFO: Pod "busybox-privileged-false-6a0fa54f-da3f-44d2-bb73-cc24954d83e5" satisfied condition "Succeeded or Failed" Apr 17 00:44:22.540: INFO: Got logs for pod "busybox-privileged-false-6a0fa54f-da3f-44d2-bb73-cc24954d83e5": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:44:22.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4001" for this suite. 
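The security-context test above runs an `ip` command in a `privileged: false` busybox container and treats the kernel's refusal (`ip: RTNETLINK answers: Operation not permitted`) as proof the container is unprivileged. A hedged sketch of that log check (this is a simplified heuristic, not the framework's exact assertion):

```go
package main

import (
	"fmt"
	"strings"
)

// unprivileged reports whether a container's log shows the kernel
// denying a privileged network operation, which is the evidence the
// test above looks for after running `ip` in a privileged=false pod.
func unprivileged(containerLog string) bool {
	return strings.Contains(containerLog, "Operation not permitted")
}

func main() {
	log := "ip: RTNETLINK answers: Operation not permitted\n" // from the run above
	fmt.Println(unprivileged(log)) // prints "true"
}
```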
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4322,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:44:22.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Apr 17 00:44:22.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Apr 17 00:44:22.722: INFO: stderr: "" Apr 17 00:44:22.722: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:44:22.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-9260" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":248,"skipped":4324,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:44:22.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 17 00:44:22.778: INFO: >>> kubeConfig: /root/.kube/config Apr 17 00:44:24.716: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:44:35.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5845" for this suite. 
• [SLOW TEST:12.654 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":249,"skipped":4325,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:44:35.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 00:44:35.999: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 17 00:44:38.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681076, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681076, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681076, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681075, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 00:44:41.042: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:44:41.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6276" for this suite. STEP: Destroying namespace "webhook-6276-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.806 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":250,"skipped":4334,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:44:41.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4907 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4907 STEP: creating replication controller externalsvc in namespace services-4907 
I0417 00:44:41.385735 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4907, replica count: 2 I0417 00:44:44.436196 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 00:44:47.436392 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 17 00:44:47.485: INFO: Creating new exec pod Apr 17 00:44:51.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4907 execpoddchgh -- /bin/sh -x -c nslookup nodeport-service' Apr 17 00:44:51.743: INFO: stderr: "I0417 00:44:51.647277 2730 log.go:172] (0xc00003b130) (0xc000bec5a0) Create stream\nI0417 00:44:51.647333 2730 log.go:172] (0xc00003b130) (0xc000bec5a0) Stream added, broadcasting: 1\nI0417 00:44:51.653373 2730 log.go:172] (0xc00003b130) Reply frame received for 1\nI0417 00:44:51.653407 2730 log.go:172] (0xc00003b130) (0xc0007c57c0) Create stream\nI0417 00:44:51.653419 2730 log.go:172] (0xc00003b130) (0xc0007c57c0) Stream added, broadcasting: 3\nI0417 00:44:51.654567 2730 log.go:172] (0xc00003b130) Reply frame received for 3\nI0417 00:44:51.654620 2730 log.go:172] (0xc00003b130) (0xc0005a4be0) Create stream\nI0417 00:44:51.654638 2730 log.go:172] (0xc00003b130) (0xc0005a4be0) Stream added, broadcasting: 5\nI0417 00:44:51.655961 2730 log.go:172] (0xc00003b130) Reply frame received for 5\nI0417 00:44:51.731295 2730 log.go:172] (0xc00003b130) Data frame received for 5\nI0417 00:44:51.731334 2730 log.go:172] (0xc0005a4be0) (5) Data frame handling\nI0417 00:44:51.731360 2730 log.go:172] (0xc0005a4be0) (5) Data frame sent\n+ nslookup nodeport-service\nI0417 00:44:51.736511 2730 log.go:172] (0xc00003b130) Data frame received for 3\nI0417 00:44:51.736532 
2730 log.go:172] (0xc0007c57c0) (3) Data frame handling\nI0417 00:44:51.736679 2730 log.go:172] (0xc0007c57c0) (3) Data frame sent\nI0417 00:44:51.737518 2730 log.go:172] (0xc00003b130) Data frame received for 3\nI0417 00:44:51.737542 2730 log.go:172] (0xc0007c57c0) (3) Data frame handling\nI0417 00:44:51.737558 2730 log.go:172] (0xc0007c57c0) (3) Data frame sent\nI0417 00:44:51.738112 2730 log.go:172] (0xc00003b130) Data frame received for 3\nI0417 00:44:51.738181 2730 log.go:172] (0xc0007c57c0) (3) Data frame handling\nI0417 00:44:51.738214 2730 log.go:172] (0xc00003b130) Data frame received for 5\nI0417 00:44:51.738225 2730 log.go:172] (0xc0005a4be0) (5) Data frame handling\nI0417 00:44:51.739621 2730 log.go:172] (0xc00003b130) Data frame received for 1\nI0417 00:44:51.739639 2730 log.go:172] (0xc000bec5a0) (1) Data frame handling\nI0417 00:44:51.739649 2730 log.go:172] (0xc000bec5a0) (1) Data frame sent\nI0417 00:44:51.739677 2730 log.go:172] (0xc00003b130) (0xc000bec5a0) Stream removed, broadcasting: 1\nI0417 00:44:51.739782 2730 log.go:172] (0xc00003b130) Go away received\nI0417 00:44:51.739946 2730 log.go:172] (0xc00003b130) (0xc000bec5a0) Stream removed, broadcasting: 1\nI0417 00:44:51.739964 2730 log.go:172] (0xc00003b130) (0xc0007c57c0) Stream removed, broadcasting: 3\nI0417 00:44:51.739973 2730 log.go:172] (0xc00003b130) (0xc0005a4be0) Stream removed, broadcasting: 5\n" Apr 17 00:44:51.743: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4907.svc.cluster.local\tcanonical name = externalsvc.services-4907.svc.cluster.local.\nName:\texternalsvc.services-4907.svc.cluster.local\nAddress: 10.96.243.66\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4907, will wait for the garbage collector to delete the pods Apr 17 00:44:51.804: INFO: Deleting ReplicationController externalsvc took: 7.513288ms Apr 17 00:45:00.605: INFO: Terminating ReplicationController externalsvc pods took: 8.80038483s 
Apr 17 00:45:13.053: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:45:13.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4907" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:31.904 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":251,"skipped":4380,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:45:13.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:45:13.139: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation 
(kubectl create and apply) allows request with any unknown properties Apr 17 00:45:15.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6896 create -f -' Apr 17 00:45:18.096: INFO: stderr: "" Apr 17 00:45:18.096: INFO: stdout: "e2e-test-crd-publish-openapi-3062-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 17 00:45:18.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6896 delete e2e-test-crd-publish-openapi-3062-crds test-cr' Apr 17 00:45:18.207: INFO: stderr: "" Apr 17 00:45:18.207: INFO: stdout: "e2e-test-crd-publish-openapi-3062-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 17 00:45:18.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6896 apply -f -' Apr 17 00:45:18.495: INFO: stderr: "" Apr 17 00:45:18.495: INFO: stdout: "e2e-test-crd-publish-openapi-3062-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 17 00:45:18.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6896 delete e2e-test-crd-publish-openapi-3062-crds test-cr' Apr 17 00:45:18.741: INFO: stderr: "" Apr 17 00:45:18.741: INFO: stdout: "e2e-test-crd-publish-openapi-3062-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 17 00:45:18.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3062-crds' Apr 17 00:45:19.073: INFO: stderr: "" Apr 17 00:45:19.073: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3062-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:45:21.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6896" for this suite. 
• [SLOW TEST:8.871 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":252,"skipped":4394,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:45:21.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:45:26.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9110" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4395,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:45:26.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 17 00:45:26.229: INFO: Waiting up to 5m0s for pod "pod-58f2e5ea-1b5c-4639-9dd7-6cdce7fa55b4" in namespace "emptydir-6048" to be "Succeeded or Failed" Apr 17 00:45:26.245: INFO: Pod "pod-58f2e5ea-1b5c-4639-9dd7-6cdce7fa55b4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.49584ms Apr 17 00:45:28.248: INFO: Pod "pod-58f2e5ea-1b5c-4639-9dd7-6cdce7fa55b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019524964s Apr 17 00:45:30.252: INFO: Pod "pod-58f2e5ea-1b5c-4639-9dd7-6cdce7fa55b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023546791s STEP: Saw pod success Apr 17 00:45:30.252: INFO: Pod "pod-58f2e5ea-1b5c-4639-9dd7-6cdce7fa55b4" satisfied condition "Succeeded or Failed" Apr 17 00:45:30.255: INFO: Trying to get logs from node latest-worker2 pod pod-58f2e5ea-1b5c-4639-9dd7-6cdce7fa55b4 container test-container: STEP: delete the pod Apr 17 00:45:30.288: INFO: Waiting for pod pod-58f2e5ea-1b5c-4639-9dd7-6cdce7fa55b4 to disappear Apr 17 00:45:30.310: INFO: Pod pod-58f2e5ea-1b5c-4639-9dd7-6cdce7fa55b4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:45:30.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6048" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4399,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:45:30.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-8cfbb9b8-3ce5-4b7b-af5b-97eb936c314e in 
namespace container-probe-6713 Apr 17 00:45:34.453: INFO: Started pod liveness-8cfbb9b8-3ce5-4b7b-af5b-97eb936c314e in namespace container-probe-6713 STEP: checking the pod's current state and verifying that restartCount is present Apr 17 00:45:34.461: INFO: Initial restart count of pod liveness-8cfbb9b8-3ce5-4b7b-af5b-97eb936c314e is 0 Apr 17 00:45:52.499: INFO: Restart count of pod container-probe-6713/liveness-8cfbb9b8-3ce5-4b7b-af5b-97eb936c314e is now 1 (18.038419852s elapsed) Apr 17 00:46:12.575: INFO: Restart count of pod container-probe-6713/liveness-8cfbb9b8-3ce5-4b7b-af5b-97eb936c314e is now 2 (38.114356025s elapsed) Apr 17 00:46:32.616: INFO: Restart count of pod container-probe-6713/liveness-8cfbb9b8-3ce5-4b7b-af5b-97eb936c314e is now 3 (58.155404269s elapsed) Apr 17 00:46:52.654: INFO: Restart count of pod container-probe-6713/liveness-8cfbb9b8-3ce5-4b7b-af5b-97eb936c314e is now 4 (1m18.193056845s elapsed) Apr 17 00:48:02.822: INFO: Restart count of pod container-probe-6713/liveness-8cfbb9b8-3ce5-4b7b-af5b-97eb936c314e is now 5 (2m28.361568942s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:48:02.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6713" for this suite. 
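The probe test above samples the liveness pod's restart count over time (1, 2, 3, 4, 5 in this run) and asserts the sequence never decreases. The invariant itself is simple to state in code; a minimal sketch, with the observed counts from this log as input:

```go
package main

import "fmt"

// monotonic reports whether restart counts never decrease between
// samples, the invariant the probing test above asserts.
func monotonic(counts []int) bool {
	for i := 1; i < len(counts); i++ {
		if counts[i] < counts[i-1] {
			return false
		}
	}
	return true
}

func main() {
	observed := []int{0, 1, 2, 3, 4, 5} // restart counts sampled in the log above
	fmt.Println(monotonic(observed))    // prints "true"
}
```

A decreasing count would indicate the kubelet lost or reset container status, which is exactly the regression this conformance test guards against.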
• [SLOW TEST:152.521 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4412,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:48:02.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 00:48:03.662: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 17 00:48:05.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681283, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681283, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681283, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722681283, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 00:48:08.684: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 17 00:48:12.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-2042 to-be-attached-pod -i -c=container1' Apr 17 00:48:12.838: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:48:12.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2042" for this suite. STEP: Destroying namespace "webhook-2042-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.175 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":256,"skipped":4416,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:48:13.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-353 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 17 00:48:13.068: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 17 00:48:13.138: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 00:48:15.142: INFO: The status of Pod 
netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 00:48:17.142: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:48:19.142: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:48:21.142: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:48:23.141: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:48:25.141: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:48:27.141: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 00:48:29.142: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 17 00:48:29.148: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 17 00:48:31.152: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 17 00:48:33.152: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 17 00:48:35.152: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 17 00:48:39.238: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.90:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-353 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:48:39.238: INFO: >>> kubeConfig: /root/.kube/config I0417 00:48:39.264591 7 log.go:172] (0xc0030802c0) (0xc0014fc960) Create stream I0417 00:48:39.264619 7 log.go:172] (0xc0030802c0) (0xc0014fc960) Stream added, broadcasting: 1 I0417 00:48:39.266338 7 log.go:172] (0xc0030802c0) Reply frame received for 1 I0417 00:48:39.266374 7 log.go:172] (0xc0030802c0) (0xc0014fcaa0) Create stream I0417 00:48:39.266384 7 log.go:172] (0xc0030802c0) (0xc0014fcaa0) Stream added, broadcasting: 3 I0417 00:48:39.267051 7 log.go:172] (0xc0030802c0) Reply frame received for 3 I0417 00:48:39.267083 7 log.go:172] (0xc0030802c0) (0xc0014fd360) Create stream I0417 
00:48:39.267094 7 log.go:172] (0xc0030802c0) (0xc0014fd360) Stream added, broadcasting: 5 I0417 00:48:39.267696 7 log.go:172] (0xc0030802c0) Reply frame received for 5 I0417 00:48:39.330093 7 log.go:172] (0xc0030802c0) Data frame received for 3 I0417 00:48:39.330188 7 log.go:172] (0xc0014fcaa0) (3) Data frame handling I0417 00:48:39.330233 7 log.go:172] (0xc0014fcaa0) (3) Data frame sent I0417 00:48:39.330306 7 log.go:172] (0xc0030802c0) Data frame received for 3 I0417 00:48:39.330333 7 log.go:172] (0xc0014fcaa0) (3) Data frame handling I0417 00:48:39.330365 7 log.go:172] (0xc0030802c0) Data frame received for 5 I0417 00:48:39.330380 7 log.go:172] (0xc0014fd360) (5) Data frame handling I0417 00:48:39.331990 7 log.go:172] (0xc0030802c0) Data frame received for 1 I0417 00:48:39.332016 7 log.go:172] (0xc0014fc960) (1) Data frame handling I0417 00:48:39.332026 7 log.go:172] (0xc0014fc960) (1) Data frame sent I0417 00:48:39.332043 7 log.go:172] (0xc0030802c0) (0xc0014fc960) Stream removed, broadcasting: 1 I0417 00:48:39.332057 7 log.go:172] (0xc0030802c0) Go away received I0417 00:48:39.332203 7 log.go:172] (0xc0030802c0) (0xc0014fc960) Stream removed, broadcasting: 1 I0417 00:48:39.332229 7 log.go:172] (0xc0030802c0) (0xc0014fcaa0) Stream removed, broadcasting: 3 I0417 00:48:39.332248 7 log.go:172] (0xc0030802c0) (0xc0014fd360) Stream removed, broadcasting: 5 Apr 17 00:48:39.332: INFO: Found all expected endpoints: [netserver-0] Apr 17 00:48:39.335: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.53:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-353 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 00:48:39.335: INFO: >>> kubeConfig: /root/.kube/config I0417 00:48:39.365777 7 log.go:172] (0xc0027ef810) (0xc001dc1860) Create stream I0417 00:48:39.365804 7 log.go:172] (0xc0027ef810) (0xc001dc1860) Stream added, 
broadcasting: 1 I0417 00:48:39.367975 7 log.go:172] (0xc0027ef810) Reply frame received for 1 I0417 00:48:39.368018 7 log.go:172] (0xc0027ef810) (0xc001dc1900) Create stream I0417 00:48:39.368032 7 log.go:172] (0xc0027ef810) (0xc001dc1900) Stream added, broadcasting: 3 I0417 00:48:39.368980 7 log.go:172] (0xc0027ef810) Reply frame received for 3 I0417 00:48:39.369021 7 log.go:172] (0xc0027ef810) (0xc001dc19a0) Create stream I0417 00:48:39.369035 7 log.go:172] (0xc0027ef810) (0xc001dc19a0) Stream added, broadcasting: 5 I0417 00:48:39.370417 7 log.go:172] (0xc0027ef810) Reply frame received for 5 I0417 00:48:39.443384 7 log.go:172] (0xc0027ef810) Data frame received for 3 I0417 00:48:39.443430 7 log.go:172] (0xc001dc1900) (3) Data frame handling I0417 00:48:39.443445 7 log.go:172] (0xc001dc1900) (3) Data frame sent I0417 00:48:39.443461 7 log.go:172] (0xc0027ef810) Data frame received for 3 I0417 00:48:39.443470 7 log.go:172] (0xc001dc1900) (3) Data frame handling I0417 00:48:39.443535 7 log.go:172] (0xc0027ef810) Data frame received for 5 I0417 00:48:39.443586 7 log.go:172] (0xc001dc19a0) (5) Data frame handling I0417 00:48:39.444799 7 log.go:172] (0xc0027ef810) Data frame received for 1 I0417 00:48:39.444821 7 log.go:172] (0xc001dc1860) (1) Data frame handling I0417 00:48:39.444828 7 log.go:172] (0xc001dc1860) (1) Data frame sent I0417 00:48:39.444836 7 log.go:172] (0xc0027ef810) (0xc001dc1860) Stream removed, broadcasting: 1 I0417 00:48:39.444844 7 log.go:172] (0xc0027ef810) Go away received I0417 00:48:39.445022 7 log.go:172] (0xc0027ef810) (0xc001dc1860) Stream removed, broadcasting: 1 I0417 00:48:39.445050 7 log.go:172] (0xc0027ef810) (0xc001dc1900) Stream removed, broadcasting: 3 I0417 00:48:39.445067 7 log.go:172] (0xc0027ef810) (0xc001dc19a0) Stream removed, broadcasting: 5 Apr 17 00:48:39.445: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:48:39.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-353" for this suite. • [SLOW TEST:26.432 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4440,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:48:39.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 00:48:39.566: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 18.926309ms) Apr 17 00:48:39.597: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 30.086652ms) Apr 17 00:48:39.650: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 53.175041ms) Apr 17 00:48:39.654: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 4.589467ms) Apr 17 00:48:39.658: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 4.001238ms) Apr 17 00:48:39.685: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 26.526602ms) Apr 17 00:48:39.688: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.84763ms) Apr 17 00:48:39.691: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.974291ms) Apr 17 00:48:39.694: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.341444ms) Apr 17 00:48:39.697: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.968924ms) Apr 17 00:48:39.700: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.942229ms) Apr 17 00:48:39.703: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.727548ms) Apr 17 00:48:39.706: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.977323ms) Apr 17 00:48:39.710: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.486264ms) Apr 17 00:48:39.713: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.197212ms) Apr 17 00:48:39.716: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.016811ms) Apr 17 00:48:39.719: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.70206ms) Apr 17 00:48:39.722: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.20621ms) Apr 17 00:48:39.725: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.796102ms) Apr 17 00:48:39.728: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.045187ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:48:39.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5482" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":258,"skipped":4456,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:48:39.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:48:44.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9320" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":259,"skipped":4463,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:48:44.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 00:48:44.435: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e78c8fcc-e081-4e83-bd85-bb2537b73d70" in namespace "downward-api-6571" to be "Succeeded or Failed" Apr 17 00:48:44.452: INFO: Pod "downwardapi-volume-e78c8fcc-e081-4e83-bd85-bb2537b73d70": Phase="Pending", Reason="", readiness=false. Elapsed: 17.201635ms Apr 17 00:48:46.457: INFO: Pod "downwardapi-volume-e78c8fcc-e081-4e83-bd85-bb2537b73d70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022551568s Apr 17 00:48:48.462: INFO: Pod "downwardapi-volume-e78c8fcc-e081-4e83-bd85-bb2537b73d70": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026743401s STEP: Saw pod success Apr 17 00:48:48.462: INFO: Pod "downwardapi-volume-e78c8fcc-e081-4e83-bd85-bb2537b73d70" satisfied condition "Succeeded or Failed" Apr 17 00:48:48.465: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e78c8fcc-e081-4e83-bd85-bb2537b73d70 container client-container: STEP: delete the pod Apr 17 00:48:48.490: INFO: Waiting for pod downwardapi-volume-e78c8fcc-e081-4e83-bd85-bb2537b73d70 to disappear Apr 17 00:48:48.501: INFO: Pod downwardapi-volume-e78c8fcc-e081-4e83-bd85-bb2537b73d70 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:48:48.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6571" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4465,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:48:48.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod 
pod-subpath-test-configmap-wslx STEP: Creating a pod to test atomic-volume-subpath Apr 17 00:48:48.770: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wslx" in namespace "subpath-1266" to be "Succeeded or Failed" Apr 17 00:48:48.776: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.618225ms Apr 17 00:48:50.779: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009554938s Apr 17 00:48:52.784: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Running", Reason="", readiness=true. Elapsed: 4.013829032s Apr 17 00:48:54.788: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Running", Reason="", readiness=true. Elapsed: 6.018074604s Apr 17 00:48:56.791: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Running", Reason="", readiness=true. Elapsed: 8.021119556s Apr 17 00:48:58.795: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Running", Reason="", readiness=true. Elapsed: 10.025294239s Apr 17 00:49:00.799: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Running", Reason="", readiness=true. Elapsed: 12.029032158s Apr 17 00:49:02.802: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Running", Reason="", readiness=true. Elapsed: 14.032671803s Apr 17 00:49:04.806: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Running", Reason="", readiness=true. Elapsed: 16.036296245s Apr 17 00:49:06.809: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Running", Reason="", readiness=true. Elapsed: 18.039611212s Apr 17 00:49:08.814: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Running", Reason="", readiness=true. Elapsed: 20.043851945s Apr 17 00:49:10.818: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Running", Reason="", readiness=true. Elapsed: 22.047950757s Apr 17 00:49:12.822: INFO: Pod "pod-subpath-test-configmap-wslx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.052245796s STEP: Saw pod success Apr 17 00:49:12.822: INFO: Pod "pod-subpath-test-configmap-wslx" satisfied condition "Succeeded or Failed" Apr 17 00:49:12.825: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-wslx container test-container-subpath-configmap-wslx: STEP: delete the pod Apr 17 00:49:12.866: INFO: Waiting for pod pod-subpath-test-configmap-wslx to disappear Apr 17 00:49:12.876: INFO: Pod pod-subpath-test-configmap-wslx no longer exists STEP: Deleting pod pod-subpath-test-configmap-wslx Apr 17 00:49:12.876: INFO: Deleting pod "pod-subpath-test-configmap-wslx" in namespace "subpath-1266" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:49:12.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1266" for this suite. • [SLOW TEST:24.371 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":261,"skipped":4471,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client 
Apr 17 00:49:12.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-b3088a70-d86c-49f8-ac51-fa8c3f9f7ea1 STEP: Creating secret with name s-test-opt-upd-56d961c3-1298-48e9-b7e9-887b95eda97e STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b3088a70-d86c-49f8-ac51-fa8c3f9f7ea1 STEP: Updating secret s-test-opt-upd-56d961c3-1298-48e9-b7e9-887b95eda97e STEP: Creating secret with name s-test-opt-create-5e55808d-f25c-4751-a267-9a85f5f0d37a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:50:33.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8111" for this suite. 
• [SLOW TEST:80.528 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4484,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:50:33.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 17 00:50:33.476: INFO: Waiting up to 5m0s for pod "pod-dd58daca-9d97-43e2-99ea-4fb7893c2c52" in namespace "emptydir-1755" to be "Succeeded or Failed" Apr 17 00:50:33.491: INFO: Pod "pod-dd58daca-9d97-43e2-99ea-4fb7893c2c52": Phase="Pending", Reason="", readiness=false. Elapsed: 14.88397ms Apr 17 00:50:35.495: INFO: Pod "pod-dd58daca-9d97-43e2-99ea-4fb7893c2c52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018472513s Apr 17 00:50:37.499: INFO: Pod "pod-dd58daca-9d97-43e2-99ea-4fb7893c2c52": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02277221s STEP: Saw pod success Apr 17 00:50:37.499: INFO: Pod "pod-dd58daca-9d97-43e2-99ea-4fb7893c2c52" satisfied condition "Succeeded or Failed" Apr 17 00:50:37.502: INFO: Trying to get logs from node latest-worker pod pod-dd58daca-9d97-43e2-99ea-4fb7893c2c52 container test-container: STEP: delete the pod Apr 17 00:50:37.522: INFO: Waiting for pod pod-dd58daca-9d97-43e2-99ea-4fb7893c2c52 to disappear Apr 17 00:50:37.527: INFO: Pod pod-dd58daca-9d97-43e2-99ea-4fb7893c2c52 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:50:37.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1755" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4500,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:50:37.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 00:50:37.616: 
INFO: Waiting up to 5m0s for pod "downwardapi-volume-124012d0-e3b6-4745-b4b2-34e634c22629" in namespace "downward-api-2720" to be "Succeeded or Failed" Apr 17 00:50:37.623: INFO: Pod "downwardapi-volume-124012d0-e3b6-4745-b4b2-34e634c22629": Phase="Pending", Reason="", readiness=false. Elapsed: 6.714649ms Apr 17 00:50:39.628: INFO: Pod "downwardapi-volume-124012d0-e3b6-4745-b4b2-34e634c22629": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011068271s Apr 17 00:50:41.634: INFO: Pod "downwardapi-volume-124012d0-e3b6-4745-b4b2-34e634c22629": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017143067s STEP: Saw pod success Apr 17 00:50:41.634: INFO: Pod "downwardapi-volume-124012d0-e3b6-4745-b4b2-34e634c22629" satisfied condition "Succeeded or Failed" Apr 17 00:50:41.675: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-124012d0-e3b6-4745-b4b2-34e634c22629 container client-container: STEP: delete the pod Apr 17 00:50:41.899: INFO: Waiting for pod downwardapi-volume-124012d0-e3b6-4745-b4b2-34e634c22629 to disappear Apr 17 00:50:42.094: INFO: Pod downwardapi-volume-124012d0-e3b6-4745-b4b2-34e634c22629 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:50:42.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2720" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4514,"failed":0} SS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:50:42.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 17 00:50:47.240: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:50:47.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8878" for this suite. 
• [SLOW TEST:5.217 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":265,"skipped":4516,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 00:50:47.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9767 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9767 STEP: Waiting until all stateful set ss replicas will be running in namespace 
statefulset-9767 Apr 17 00:50:47.442: INFO: Found 0 stateful pods, waiting for 1 Apr 17 00:50:57.446: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 17 00:50:57.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9767 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 17 00:50:57.682: INFO: stderr: "I0417 00:50:57.579990 2878 log.go:172] (0xc00094e160) (0xc000930000) Create stream\nI0417 00:50:57.580048 2878 log.go:172] (0xc00094e160) (0xc000930000) Stream added, broadcasting: 1\nI0417 00:50:57.582171 2878 log.go:172] (0xc00094e160) Reply frame received for 1\nI0417 00:50:57.582198 2878 log.go:172] (0xc00094e160) (0xc0009300a0) Create stream\nI0417 00:50:57.582209 2878 log.go:172] (0xc00094e160) (0xc0009300a0) Stream added, broadcasting: 3\nI0417 00:50:57.583116 2878 log.go:172] (0xc00094e160) Reply frame received for 3\nI0417 00:50:57.583148 2878 log.go:172] (0xc00094e160) (0xc0009c4000) Create stream\nI0417 00:50:57.583158 2878 log.go:172] (0xc00094e160) (0xc0009c4000) Stream added, broadcasting: 5\nI0417 00:50:57.584142 2878 log.go:172] (0xc00094e160) Reply frame received for 5\nI0417 00:50:57.649499 2878 log.go:172] (0xc00094e160) Data frame received for 5\nI0417 00:50:57.649529 2878 log.go:172] (0xc0009c4000) (5) Data frame handling\nI0417 00:50:57.649542 2878 log.go:172] (0xc0009c4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0417 00:50:57.675706 2878 log.go:172] (0xc00094e160) Data frame received for 5\nI0417 00:50:57.675748 2878 log.go:172] (0xc0009c4000) (5) Data frame handling\nI0417 00:50:57.675788 2878 log.go:172] (0xc00094e160) Data frame received for 3\nI0417 00:50:57.675807 2878 log.go:172] (0xc0009300a0) (3) Data frame handling\nI0417 00:50:57.675821 2878 log.go:172] 
(0xc0009300a0) (3) Data frame sent\nI0417 00:50:57.675832 2878 log.go:172] (0xc00094e160) Data frame received for 3\nI0417 00:50:57.675844 2878 log.go:172] (0xc0009300a0) (3) Data frame handling\nI0417 00:50:57.677593 2878 log.go:172] (0xc00094e160) Data frame received for 1\nI0417 00:50:57.677620 2878 log.go:172] (0xc000930000) (1) Data frame handling\nI0417 00:50:57.677638 2878 log.go:172] (0xc000930000) (1) Data frame sent\nI0417 00:50:57.677649 2878 log.go:172] (0xc00094e160) (0xc000930000) Stream removed, broadcasting: 1\nI0417 00:50:57.677660 2878 log.go:172] (0xc00094e160) Go away received\nI0417 00:50:57.678058 2878 log.go:172] (0xc00094e160) (0xc000930000) Stream removed, broadcasting: 1\nI0417 00:50:57.678077 2878 log.go:172] (0xc00094e160) (0xc0009300a0) Stream removed, broadcasting: 3\nI0417 00:50:57.678086 2878 log.go:172] (0xc00094e160) (0xc0009c4000) Stream removed, broadcasting: 5\n" Apr 17 00:50:57.682: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 17 00:50:57.682: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 17 00:50:57.685: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 17 00:51:07.689: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 17 00:51:07.690: INFO: Waiting for statefulset status.replicas updated to 0 Apr 17 00:51:07.711: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999585s Apr 17 00:51:08.715: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.987465541s Apr 17 00:51:09.719: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.982985641s Apr 17 00:51:10.724: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.979146267s Apr 17 00:51:11.728: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.974429359s Apr 17 00:51:12.732: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 4.970174632s Apr 17 00:51:13.736: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.965746167s Apr 17 00:51:14.740: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.962261714s Apr 17 00:51:15.746: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.957916256s Apr 17 00:51:16.753: INFO: Verifying statefulset ss doesn't scale past 1 for another 952.015609ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9767 Apr 17 00:51:17.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9767 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 17 00:51:17.961: INFO: stderr: "I0417 00:51:17.884159 2899 log.go:172] (0xc000988dc0) (0xc000976640) Create stream\nI0417 00:51:17.884212 2899 log.go:172] (0xc000988dc0) (0xc000976640) Stream added, broadcasting: 1\nI0417 00:51:17.889336 2899 log.go:172] (0xc000988dc0) Reply frame received for 1\nI0417 00:51:17.889372 2899 log.go:172] (0xc000988dc0) (0xc0005bf540) Create stream\nI0417 00:51:17.889383 2899 log.go:172] (0xc000988dc0) (0xc0005bf540) Stream added, broadcasting: 3\nI0417 00:51:17.890352 2899 log.go:172] (0xc000988dc0) Reply frame received for 3\nI0417 00:51:17.890381 2899 log.go:172] (0xc000988dc0) (0xc00002c960) Create stream\nI0417 00:51:17.890390 2899 log.go:172] (0xc000988dc0) (0xc00002c960) Stream added, broadcasting: 5\nI0417 00:51:17.891137 2899 log.go:172] (0xc000988dc0) Reply frame received for 5\nI0417 00:51:17.953929 2899 log.go:172] (0xc000988dc0) Data frame received for 3\nI0417 00:51:17.953986 2899 log.go:172] (0xc0005bf540) (3) Data frame handling\nI0417 00:51:17.954010 2899 log.go:172] (0xc0005bf540) (3) Data frame sent\nI0417 00:51:17.954030 2899 log.go:172] (0xc000988dc0) Data frame received for 3\nI0417 00:51:17.954048 
2899 log.go:172] (0xc0005bf540) (3) Data frame handling\nI0417 00:51:17.954102 2899 log.go:172] (0xc000988dc0) Data frame received for 5\nI0417 00:51:17.954121 2899 log.go:172] (0xc00002c960) (5) Data frame handling\nI0417 00:51:17.954135 2899 log.go:172] (0xc00002c960) (5) Data frame sent\nI0417 00:51:17.954160 2899 log.go:172] (0xc000988dc0) Data frame received for 5\nI0417 00:51:17.954185 2899 log.go:172] (0xc00002c960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0417 00:51:17.955703 2899 log.go:172] (0xc000988dc0) Data frame received for 1\nI0417 00:51:17.955727 2899 log.go:172] (0xc000976640) (1) Data frame handling\nI0417 00:51:17.955753 2899 log.go:172] (0xc000976640) (1) Data frame sent\nI0417 00:51:17.955771 2899 log.go:172] (0xc000988dc0) (0xc000976640) Stream removed, broadcasting: 1\nI0417 00:51:17.955895 2899 log.go:172] (0xc000988dc0) Go away received\nI0417 00:51:17.956091 2899 log.go:172] (0xc000988dc0) (0xc000976640) Stream removed, broadcasting: 1\nI0417 00:51:17.956105 2899 log.go:172] (0xc000988dc0) (0xc0005bf540) Stream removed, broadcasting: 3\nI0417 00:51:17.956114 2899 log.go:172] (0xc000988dc0) (0xc00002c960) Stream removed, broadcasting: 5\n" Apr 17 00:51:17.961: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 17 00:51:17.961: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 17 00:51:17.964: INFO: Found 1 stateful pods, waiting for 3 Apr 17 00:51:27.969: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 17 00:51:27.969: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 17 00:51:27.970: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 17 
00:51:27.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9767 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 17 00:51:28.195: INFO: stderr: "I0417 00:51:28.116534 2918 log.go:172] (0xc000a4aa50) (0xc000a30280) Create stream\nI0417 00:51:28.116597 2918 log.go:172] (0xc000a4aa50) (0xc000a30280) Stream added, broadcasting: 1\nI0417 00:51:28.119658 2918 log.go:172] (0xc000a4aa50) Reply frame received for 1\nI0417 00:51:28.119706 2918 log.go:172] (0xc000a4aa50) (0xc0005b14a0) Create stream\nI0417 00:51:28.119718 2918 log.go:172] (0xc000a4aa50) (0xc0005b14a0) Stream added, broadcasting: 3\nI0417 00:51:28.120672 2918 log.go:172] (0xc000a4aa50) Reply frame received for 3\nI0417 00:51:28.120708 2918 log.go:172] (0xc000a4aa50) (0xc000a303c0) Create stream\nI0417 00:51:28.120719 2918 log.go:172] (0xc000a4aa50) (0xc000a303c0) Stream added, broadcasting: 5\nI0417 00:51:28.122025 2918 log.go:172] (0xc000a4aa50) Reply frame received for 5\nI0417 00:51:28.189687 2918 log.go:172] (0xc000a4aa50) Data frame received for 5\nI0417 00:51:28.189747 2918 log.go:172] (0xc000a303c0) (5) Data frame handling\nI0417 00:51:28.189774 2918 log.go:172] (0xc000a303c0) (5) Data frame sent\nI0417 00:51:28.189792 2918 log.go:172] (0xc000a4aa50) Data frame received for 5\nI0417 00:51:28.189808 2918 log.go:172] (0xc000a303c0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0417 00:51:28.189833 2918 log.go:172] (0xc000a4aa50) Data frame received for 3\nI0417 00:51:28.189852 2918 log.go:172] (0xc0005b14a0) (3) Data frame handling\nI0417 00:51:28.189885 2918 log.go:172] (0xc0005b14a0) (3) Data frame sent\nI0417 00:51:28.189906 2918 log.go:172] (0xc000a4aa50) Data frame received for 3\nI0417 00:51:28.189916 2918 log.go:172] (0xc0005b14a0) (3) Data frame handling\nI0417 00:51:28.191344 2918 log.go:172] (0xc000a4aa50) Data frame received for 1\nI0417 
00:51:28.191367 2918 log.go:172] (0xc000a30280) (1) Data frame handling\nI0417 00:51:28.191375 2918 log.go:172] (0xc000a30280) (1) Data frame sent\nI0417 00:51:28.191386 2918 log.go:172] (0xc000a4aa50) (0xc000a30280) Stream removed, broadcasting: 1\nI0417 00:51:28.191395 2918 log.go:172] (0xc000a4aa50) Go away received\nI0417 00:51:28.191724 2918 log.go:172] (0xc000a4aa50) (0xc000a30280) Stream removed, broadcasting: 1\nI0417 00:51:28.191739 2918 log.go:172] (0xc000a4aa50) (0xc0005b14a0) Stream removed, broadcasting: 3\nI0417 00:51:28.191747 2918 log.go:172] (0xc000a4aa50) (0xc000a303c0) Stream removed, broadcasting: 5\n" Apr 17 00:51:28.195: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 17 00:51:28.195: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 17 00:51:28.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9767 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 17 00:51:28.423: INFO: stderr: "I0417 00:51:28.321744 2941 log.go:172] (0xc00052e6e0) (0xc0006ef360) Create stream\nI0417 00:51:28.321791 2941 log.go:172] (0xc00052e6e0) (0xc0006ef360) Stream added, broadcasting: 1\nI0417 00:51:28.323744 2941 log.go:172] (0xc00052e6e0) Reply frame received for 1\nI0417 00:51:28.323774 2941 log.go:172] (0xc00052e6e0) (0xc0005bb4a0) Create stream\nI0417 00:51:28.323784 2941 log.go:172] (0xc00052e6e0) (0xc0005bb4a0) Stream added, broadcasting: 3\nI0417 00:51:28.324519 2941 log.go:172] (0xc00052e6e0) Reply frame received for 3\nI0417 00:51:28.324565 2941 log.go:172] (0xc00052e6e0) (0xc000b74000) Create stream\nI0417 00:51:28.324576 2941 log.go:172] (0xc00052e6e0) (0xc000b74000) Stream added, broadcasting: 5\nI0417 00:51:28.325545 2941 log.go:172] (0xc00052e6e0) Reply frame received for 5\nI0417 00:51:28.389783 2941 
log.go:172] (0xc00052e6e0) Data frame received for 5\nI0417 00:51:28.389812 2941 log.go:172] (0xc000b74000) (5) Data frame handling\nI0417 00:51:28.389834 2941 log.go:172] (0xc000b74000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0417 00:51:28.417352 2941 log.go:172] (0xc00052e6e0) Data frame received for 3\nI0417 00:51:28.417380 2941 log.go:172] (0xc0005bb4a0) (3) Data frame handling\nI0417 00:51:28.417397 2941 log.go:172] (0xc0005bb4a0) (3) Data frame sent\nI0417 00:51:28.417406 2941 log.go:172] (0xc00052e6e0) Data frame received for 3\nI0417 00:51:28.417416 2941 log.go:172] (0xc0005bb4a0) (3) Data frame handling\nI0417 00:51:28.417450 2941 log.go:172] (0xc00052e6e0) Data frame received for 5\nI0417 00:51:28.417468 2941 log.go:172] (0xc000b74000) (5) Data frame handling\nI0417 00:51:28.419471 2941 log.go:172] (0xc00052e6e0) Data frame received for 1\nI0417 00:51:28.419486 2941 log.go:172] (0xc0006ef360) (1) Data frame handling\nI0417 00:51:28.419500 2941 log.go:172] (0xc0006ef360) (1) Data frame sent\nI0417 00:51:28.419574 2941 log.go:172] (0xc00052e6e0) (0xc0006ef360) Stream removed, broadcasting: 1\nI0417 00:51:28.419751 2941 log.go:172] (0xc00052e6e0) Go away received\nI0417 00:51:28.419823 2941 log.go:172] (0xc00052e6e0) (0xc0006ef360) Stream removed, broadcasting: 1\nI0417 00:51:28.419836 2941 log.go:172] (0xc00052e6e0) (0xc0005bb4a0) Stream removed, broadcasting: 3\nI0417 00:51:28.419842 2941 log.go:172] (0xc00052e6e0) (0xc000b74000) Stream removed, broadcasting: 5\n" Apr 17 00:51:28.423: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 17 00:51:28.423: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 17 00:51:28.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9767 ss-2 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 17 00:51:28.652: INFO: stderr: "I0417 00:51:28.544583 2962 log.go:172] (0xc000ad4000) (0xc000a1c000) Create stream\nI0417 00:51:28.544655 2962 log.go:172] (0xc000ad4000) (0xc000a1c000) Stream added, broadcasting: 1\nI0417 00:51:28.546564 2962 log.go:172] (0xc000ad4000) Reply frame received for 1\nI0417 00:51:28.546623 2962 log.go:172] (0xc000ad4000) (0xc000a1c0a0) Create stream\nI0417 00:51:28.546640 2962 log.go:172] (0xc000ad4000) (0xc000a1c0a0) Stream added, broadcasting: 3\nI0417 00:51:28.547527 2962 log.go:172] (0xc000ad4000) Reply frame received for 3\nI0417 00:51:28.547576 2962 log.go:172] (0xc000ad4000) (0xc000a08000) Create stream\nI0417 00:51:28.547592 2962 log.go:172] (0xc000ad4000) (0xc000a08000) Stream added, broadcasting: 5\nI0417 00:51:28.548752 2962 log.go:172] (0xc000ad4000) Reply frame received for 5\nI0417 00:51:28.609995 2962 log.go:172] (0xc000ad4000) Data frame received for 5\nI0417 00:51:28.610031 2962 log.go:172] (0xc000a08000) (5) Data frame handling\nI0417 00:51:28.610052 2962 log.go:172] (0xc000a08000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0417 00:51:28.645996 2962 log.go:172] (0xc000ad4000) Data frame received for 3\nI0417 00:51:28.646028 2962 log.go:172] (0xc000a1c0a0) (3) Data frame handling\nI0417 00:51:28.646047 2962 log.go:172] (0xc000a1c0a0) (3) Data frame sent\nI0417 00:51:28.646275 2962 log.go:172] (0xc000ad4000) Data frame received for 3\nI0417 00:51:28.646322 2962 log.go:172] (0xc000a1c0a0) (3) Data frame handling\nI0417 00:51:28.646476 2962 log.go:172] (0xc000ad4000) Data frame received for 5\nI0417 00:51:28.646548 2962 log.go:172] (0xc000a08000) (5) Data frame handling\nI0417 00:51:28.648275 2962 log.go:172] (0xc000ad4000) Data frame received for 1\nI0417 00:51:28.648302 2962 log.go:172] (0xc000a1c000) (1) Data frame handling\nI0417 00:51:28.648313 2962 log.go:172] (0xc000a1c000) (1) Data frame sent\nI0417 00:51:28.648326 2962 
log.go:172] (0xc000ad4000) (0xc000a1c000) Stream removed, broadcasting: 1\nI0417 00:51:28.648349 2962 log.go:172] (0xc000ad4000) Go away received\nI0417 00:51:28.648826 2962 log.go:172] (0xc000ad4000) (0xc000a1c000) Stream removed, broadcasting: 1\nI0417 00:51:28.648843 2962 log.go:172] (0xc000ad4000) (0xc000a1c0a0) Stream removed, broadcasting: 3\nI0417 00:51:28.648851 2962 log.go:172] (0xc000ad4000) (0xc000a08000) Stream removed, broadcasting: 5\n" Apr 17 00:51:28.652: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 17 00:51:28.653: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 17 00:51:28.653: INFO: Waiting for statefulset status.replicas updated to 0 Apr 17 00:51:28.694: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 17 00:51:38.702: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 17 00:51:38.702: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 17 00:51:38.702: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 17 00:51:38.717: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999629s Apr 17 00:51:39.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993510187s Apr 17 00:51:40.729: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989120196s Apr 17 00:51:41.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981497528s Apr 17 00:51:42.739: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976428046s Apr 17 00:51:43.744: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971481311s Apr 17 00:51:44.748: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966693091s Apr 17 00:51:45.753: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 2.963237982s Apr 17 00:51:46.783: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958106891s Apr 17 00:51:47.787: INFO: Verifying statefulset ss doesn't scale past 3 for another 927.543497ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9767 Apr 17 00:51:48.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9767 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 17 00:51:49.047: INFO: stderr: "I0417 00:51:48.946122 2981 log.go:172] (0xc00003a840) (0xc0007eb360) Create stream\nI0417 00:51:48.946197 2981 log.go:172] (0xc00003a840) (0xc0007eb360) Stream added, broadcasting: 1\nI0417 00:51:48.948757 2981 log.go:172] (0xc00003a840) Reply frame received for 1\nI0417 00:51:48.948801 2981 log.go:172] (0xc00003a840) (0xc000a6c000) Create stream\nI0417 00:51:48.948814 2981 log.go:172] (0xc00003a840) (0xc000a6c000) Stream added, broadcasting: 3\nI0417 00:51:48.950082 2981 log.go:172] (0xc00003a840) Reply frame received for 3\nI0417 00:51:48.950137 2981 log.go:172] (0xc00003a840) (0xc0007eb400) Create stream\nI0417 00:51:48.950164 2981 log.go:172] (0xc00003a840) (0xc0007eb400) Stream added, broadcasting: 5\nI0417 00:51:48.951192 2981 log.go:172] (0xc00003a840) Reply frame received for 5\nI0417 00:51:49.040058 2981 log.go:172] (0xc00003a840) Data frame received for 5\nI0417 00:51:49.040092 2981 log.go:172] (0xc0007eb400) (5) Data frame handling\nI0417 00:51:49.040103 2981 log.go:172] (0xc0007eb400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0417 00:51:49.040139 2981 log.go:172] (0xc00003a840) Data frame received for 3\nI0417 00:51:49.040167 2981 log.go:172] (0xc000a6c000) (3) Data frame handling\nI0417 00:51:49.040186 2981 log.go:172] (0xc000a6c000) (3) Data frame sent\nI0417 00:51:49.040194 2981 log.go:172] (0xc00003a840) Data frame 
received for 3\nI0417 00:51:49.040201 2981 log.go:172] (0xc000a6c000) (3) Data frame handling\nI0417 00:51:49.040385 2981 log.go:172] (0xc00003a840) Data frame received for 5\nI0417 00:51:49.040406 2981 log.go:172] (0xc0007eb400) (5) Data frame handling\nI0417 00:51:49.041765 2981 log.go:172] (0xc00003a840) Data frame received for 1\nI0417 00:51:49.041789 2981 log.go:172] (0xc0007eb360) (1) Data frame handling\nI0417 00:51:49.041817 2981 log.go:172] (0xc0007eb360) (1) Data frame sent\nI0417 00:51:49.041843 2981 log.go:172] (0xc00003a840) (0xc0007eb360) Stream removed, broadcasting: 1\nI0417 00:51:49.041873 2981 log.go:172] (0xc00003a840) Go away received\nI0417 00:51:49.042346 2981 log.go:172] (0xc00003a840) (0xc0007eb360) Stream removed, broadcasting: 1\nI0417 00:51:49.042367 2981 log.go:172] (0xc00003a840) (0xc000a6c000) Stream removed, broadcasting: 3\nI0417 00:51:49.042378 2981 log.go:172] (0xc00003a840) (0xc0007eb400) Stream removed, broadcasting: 5\n" Apr 17 00:51:49.047: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 17 00:51:49.047: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 17 00:51:49.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9767 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 17 00:51:49.251: INFO: stderr: "I0417 00:51:49.180977 3003 log.go:172] (0xc000bac000) (0xc0007eb720) Create stream\nI0417 00:51:49.181058 3003 log.go:172] (0xc000bac000) (0xc0007eb720) Stream added, broadcasting: 1\nI0417 00:51:49.182910 3003 log.go:172] (0xc000bac000) Reply frame received for 1\nI0417 00:51:49.182947 3003 log.go:172] (0xc000bac000) (0xc0005e4b40) Create stream\nI0417 00:51:49.182958 3003 log.go:172] (0xc000bac000) (0xc0005e4b40) Stream added, broadcasting: 3\nI0417 00:51:49.183794 3003 log.go:172] 
(0xc000bac000) Reply frame received for 3\nI0417 00:51:49.183832 3003 log.go:172] (0xc000bac000) (0xc0005e4be0) Create stream\nI0417 00:51:49.183843 3003 log.go:172] (0xc000bac000) (0xc0005e4be0) Stream added, broadcasting: 5\nI0417 00:51:49.184687 3003 log.go:172] (0xc000bac000) Reply frame received for 5\nI0417 00:51:49.244162 3003 log.go:172] (0xc000bac000) Data frame received for 3\nI0417 00:51:49.244196 3003 log.go:172] (0xc0005e4b40) (3) Data frame handling\nI0417 00:51:49.244210 3003 log.go:172] (0xc0005e4b40) (3) Data frame sent\nI0417 00:51:49.244217 3003 log.go:172] (0xc000bac000) Data frame received for 3\nI0417 00:51:49.244224 3003 log.go:172] (0xc0005e4b40) (3) Data frame handling\nI0417 00:51:49.244307 3003 log.go:172] (0xc000bac000) Data frame received for 5\nI0417 00:51:49.244317 3003 log.go:172] (0xc0005e4be0) (5) Data frame handling\nI0417 00:51:49.244326 3003 log.go:172] (0xc0005e4be0) (5) Data frame sent\nI0417 00:51:49.244333 3003 log.go:172] (0xc000bac000) Data frame received for 5\nI0417 00:51:49.244340 3003 log.go:172] (0xc0005e4be0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0417 00:51:49.246382 3003 log.go:172] (0xc000bac000) Data frame received for 1\nI0417 00:51:49.246417 3003 log.go:172] (0xc0007eb720) (1) Data frame handling\nI0417 00:51:49.246434 3003 log.go:172] (0xc0007eb720) (1) Data frame sent\nI0417 00:51:49.246486 3003 log.go:172] (0xc000bac000) (0xc0007eb720) Stream removed, broadcasting: 1\nI0417 00:51:49.246601 3003 log.go:172] (0xc000bac000) Go away received\nI0417 00:51:49.246848 3003 log.go:172] (0xc000bac000) (0xc0007eb720) Stream removed, broadcasting: 1\nI0417 00:51:49.246874 3003 log.go:172] (0xc000bac000) (0xc0005e4b40) Stream removed, broadcasting: 3\nI0417 00:51:49.246888 3003 log.go:172] (0xc000bac000) (0xc0005e4be0) Stream removed, broadcasting: 5\n" Apr 17 00:51:49.252: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 17 00:51:49.252: INFO: 
stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 17 00:51:49.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9767 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 17 00:51:49.499: INFO: stderr: "I0417 00:51:49.398670 3024 log.go:172] (0xc0007d6b00) (0xc00077c140) Create stream\nI0417 00:51:49.398746 3024 log.go:172] (0xc0007d6b00) (0xc00077c140) Stream added, broadcasting: 1\nI0417 00:51:49.401732 3024 log.go:172] (0xc0007d6b00) Reply frame received for 1\nI0417 00:51:49.401770 3024 log.go:172] (0xc0007d6b00) (0xc0006892c0) Create stream\nI0417 00:51:49.401778 3024 log.go:172] (0xc0007d6b00) (0xc0006892c0) Stream added, broadcasting: 3\nI0417 00:51:49.402875 3024 log.go:172] (0xc0007d6b00) Reply frame received for 3\nI0417 00:51:49.402934 3024 log.go:172] (0xc0007d6b00) (0xc000546000) Create stream\nI0417 00:51:49.402953 3024 log.go:172] (0xc0007d6b00) (0xc000546000) Stream added, broadcasting: 5\nI0417 00:51:49.404164 3024 log.go:172] (0xc0007d6b00) Reply frame received for 5\nI0417 00:51:49.487434 3024 log.go:172] (0xc0007d6b00) Data frame received for 5\nI0417 00:51:49.487456 3024 log.go:172] (0xc000546000) (5) Data frame handling\nI0417 00:51:49.487470 3024 log.go:172] (0xc000546000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0417 00:51:49.491093 3024 log.go:172] (0xc0007d6b00) Data frame received for 5\nI0417 00:51:49.491139 3024 log.go:172] (0xc000546000) (5) Data frame handling\nI0417 00:51:49.491171 3024 log.go:172] (0xc0007d6b00) Data frame received for 3\nI0417 00:51:49.491192 3024 log.go:172] (0xc0006892c0) (3) Data frame handling\nI0417 00:51:49.491208 3024 log.go:172] (0xc0006892c0) (3) Data frame sent\nI0417 00:51:49.491221 3024 log.go:172] (0xc0007d6b00) Data frame received for 3\nI0417 00:51:49.491239 
3024 log.go:172] (0xc0006892c0) (3) Data frame handling\nI0417 00:51:49.492493 3024 log.go:172] (0xc0007d6b00) Data frame received for 1\nI0417 00:51:49.492514 3024 log.go:172] (0xc00077c140) (1) Data frame handling\nI0417 00:51:49.492524 3024 log.go:172] (0xc00077c140) (1) Data frame sent\nI0417 00:51:49.492539 3024 log.go:172] (0xc0007d6b00) (0xc00077c140) Stream removed, broadcasting: 1\nI0417 00:51:49.492563 3024 log.go:172] (0xc0007d6b00) Go away received\nI0417 00:51:49.492885 3024 log.go:172] (0xc0007d6b00) (0xc00077c140) Stream removed, broadcasting: 1\nI0417 00:51:49.492900 3024 log.go:172] (0xc0007d6b00) (0xc0006892c0) Stream removed, broadcasting: 3\nI0417 00:51:49.492907 3024 log.go:172] (0xc0007d6b00) (0xc000546000) Stream removed, broadcasting: 5\n" Apr 17 00:51:49.499: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 17 00:51:49.499: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 17 00:51:49.499: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 17 00:52:29.515: INFO: Deleting all statefulset in ns statefulset-9767 Apr 17 00:52:29.519: INFO: Scaling statefulset ss to 0 Apr 17 00:52:29.529: INFO: Waiting for statefulset status.replicas updated to 0 Apr 17 00:52:29.531: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 00:52:29.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9767" for this suite. 
• [SLOW TEST:102.282 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":266,"skipped":4527,"failed":0}
SSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:52:29.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Apr 17 00:52:30.174: INFO: created pod pod-service-account-defaultsa
Apr 17 00:52:30.174: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 17 00:52:30.179: INFO: created pod pod-service-account-mountsa
Apr 17 00:52:30.179: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 17 00:52:30.209: INFO: created pod
pod-service-account-nomountsa
Apr 17 00:52:30.209: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 17 00:52:30.222: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 17 00:52:30.222: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 17 00:52:30.252: INFO: created pod pod-service-account-mountsa-mountspec
Apr 17 00:52:30.252: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 17 00:52:30.305: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 17 00:52:30.305: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 17 00:52:30.318: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 17 00:52:30.318: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 17 00:52:30.365: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 17 00:52:30.365: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 17 00:52:30.403: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 17 00:52:30.403: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:52:30.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9378" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":267,"skipped":4532,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:52:30.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 17 00:52:30.634: INFO: Waiting up to 5m0s for pod "pod-e5d33996-c119-4925-b8ff-810400243059" in namespace "emptydir-3736" to be "Succeeded or Failed"
Apr 17 00:52:30.638: INFO: Pod "pod-e5d33996-c119-4925-b8ff-810400243059": Phase="Pending", Reason="", readiness=false. Elapsed: 3.949319ms
Apr 17 00:52:32.642: INFO: Pod "pod-e5d33996-c119-4925-b8ff-810400243059": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007707212s
Apr 17 00:52:34.712: INFO: Pod "pod-e5d33996-c119-4925-b8ff-810400243059": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077717985s
Apr 17 00:52:36.856: INFO: Pod "pod-e5d33996-c119-4925-b8ff-810400243059": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221723902s
Apr 17 00:52:39.191: INFO: Pod "pod-e5d33996-c119-4925-b8ff-810400243059": Phase="Pending", Reason="", readiness=false.
Elapsed: 8.557316041s
Apr 17 00:52:41.215: INFO: Pod "pod-e5d33996-c119-4925-b8ff-810400243059": Phase="Running", Reason="", readiness=true. Elapsed: 10.580727391s
Apr 17 00:52:43.219: INFO: Pod "pod-e5d33996-c119-4925-b8ff-810400243059": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.584975848s
STEP: Saw pod success
Apr 17 00:52:43.219: INFO: Pod "pod-e5d33996-c119-4925-b8ff-810400243059" satisfied condition "Succeeded or Failed"
Apr 17 00:52:43.222: INFO: Trying to get logs from node latest-worker2 pod pod-e5d33996-c119-4925-b8ff-810400243059 container test-container:
STEP: delete the pod
Apr 17 00:52:43.271: INFO: Waiting for pod pod-e5d33996-c119-4925-b8ff-810400243059 to disappear
Apr 17 00:52:43.288: INFO: Pod pod-e5d33996-c119-4925-b8ff-810400243059 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:52:43.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3736" for this suite.
• [SLOW TEST:12.803 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4557,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:52:43.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 17 00:52:43.392: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e319535c-47e2-42fa-8650-337745e202e1" in namespace "projected-7058" to be "Succeeded or Failed"
Apr 17 00:52:43.395: INFO: Pod "downwardapi-volume-e319535c-47e2-42fa-8650-337745e202e1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.309535ms
Apr 17 00:52:45.400: INFO: Pod "downwardapi-volume-e319535c-47e2-42fa-8650-337745e202e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008135862s
Apr 17 00:52:47.404: INFO: Pod "downwardapi-volume-e319535c-47e2-42fa-8650-337745e202e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012167643s
STEP: Saw pod success
Apr 17 00:52:47.404: INFO: Pod "downwardapi-volume-e319535c-47e2-42fa-8650-337745e202e1" satisfied condition "Succeeded or Failed"
Apr 17 00:52:47.406: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e319535c-47e2-42fa-8650-337745e202e1 container client-container:
STEP: delete the pod
Apr 17 00:52:47.450: INFO: Waiting for pod downwardapi-volume-e319535c-47e2-42fa-8650-337745e202e1 to disappear
Apr 17 00:52:47.464: INFO: Pod downwardapi-volume-e319535c-47e2-42fa-8650-337745e202e1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:52:47.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7058" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4559,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:52:47.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 17 00:52:47.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Apr 17 00:52:50.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1205 create -f -'
Apr 17 00:52:53.301: INFO: stderr: ""
Apr 17 00:52:53.301: INFO: stdout: "e2e-test-crd-publish-openapi-5105-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 17 00:52:53.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1205 delete e2e-test-crd-publish-openapi-5105-crds test-foo'
Apr 17 00:52:53.411: INFO: stderr: ""
Apr 17 00:52:53.411: INFO: stdout: "e2e-test-crd-publish-openapi-5105-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Apr 17 00:52:53.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1205 apply -f -'
Apr 17 00:52:53.647: INFO: stderr: ""
Apr 17 00:52:53.647: INFO: stdout: "e2e-test-crd-publish-openapi-5105-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 17 00:52:53.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1205 delete e2e-test-crd-publish-openapi-5105-crds test-foo'
Apr 17 00:52:53.762: INFO: stderr: ""
Apr 17 00:52:53.762: INFO: stdout: "e2e-test-crd-publish-openapi-5105-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Apr 17 00:52:53.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1205 create -f -'
Apr 17 00:52:53.991: INFO: rc: 1
Apr 17 00:52:53.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1205 apply -f -'
Apr 17 00:52:54.289: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Apr 17 00:52:54.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1205 create -f -'
Apr 17 00:52:54.514: INFO: rc: 1
Apr 17 00:52:54.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1205 apply -f -'
Apr 17 00:52:54.760: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Apr 17 00:52:54.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5105-crds'
Apr 17 00:52:54.997: INFO: stderr: ""
Apr 17 00:52:54.997: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5105-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Apr 17 00:52:54.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5105-crds.metadata'
Apr 17 00:52:55.242: INFO: stderr: ""
Apr 17 00:52:55.242: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5105-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Apr 17 00:52:55.243: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5105-crds.spec'
Apr 17 00:52:55.471: INFO: stderr: ""
Apr 17 00:52:55.471: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5105-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Apr 17 00:52:55.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5105-crds.spec.bars'
Apr 17 00:52:55.709: INFO: stderr: ""
Apr 17 00:52:55.709: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5105-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Apr 17 00:52:55.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5105-crds.spec.bars2'
Apr 17 00:52:55.943: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:52:57.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1205" for this suite.
• [SLOW TEST:10.381 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":270,"skipped":4579,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:52:57.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Apr 17 00:52:57.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:53:14.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1008" for this suite.
• [SLOW TEST:16.702 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":271,"skipped":4596,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:53:14.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Apr 17 00:53:14.685: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:53:14.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1593" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":272,"skipped":4643,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:53:14.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:53:30.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8461" for this suite.
• [SLOW TEST:16.148 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":273,"skipped":4668,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:53:30.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 17 00:53:30.970: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:53:36.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9653" for this suite.
• [SLOW TEST:5.325 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":274,"skipped":4679,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 00:53:36.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 17 00:53:36.303: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-062b3b58-09d2-462c-8064-2fac3c5d0f93" in namespace "security-context-test-1770" to be "Succeeded or Failed"
Apr 17 00:53:36.306: INFO: Pod "busybox-readonly-false-062b3b58-09d2-462c-8064-2fac3c5d0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.184959ms
Apr 17 00:53:38.310: INFO: Pod "busybox-readonly-false-062b3b58-09d2-462c-8064-2fac3c5d0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007380001s
Apr 17 00:53:40.316: INFO: Pod "busybox-readonly-false-062b3b58-09d2-462c-8064-2fac3c5d0f93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013170627s
Apr 17 00:53:40.316: INFO: Pod "busybox-readonly-false-062b3b58-09d2-462c-8064-2fac3c5d0f93" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 00:53:40.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1770" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4696,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
Apr 17 00:53:40.324: INFO: Running AfterSuite actions on all nodes
Apr 17 00:53:40.324: INFO: Running AfterSuite actions on node 1
Apr 17 00:53:40.324: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 4555.085 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS