I0422 21:07:28.074823 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0422 21:07:28.075145 6 e2e.go:109] Starting e2e run "ba471c40-6b3c-4ca7-aac7-e12dcd8d8e88" on Ginkgo node 1 {"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1587589647 - Will randomize all specs Will run 278 of 4842 specs Apr 22 21:07:28.136: INFO: >>> kubeConfig: /root/.kube/config Apr 22 21:07:28.141: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Apr 22 21:07:28.168: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 22 21:07:28.201: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 22 21:07:28.201: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 22 21:07:28.201: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Apr 22 21:07:28.212: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Apr 22 21:07:28.212: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Apr 22 21:07:28.212: INFO: e2e test version: v1.17.4 Apr 22 21:07:28.213: INFO: kube-apiserver version: v1.17.2 Apr 22 21:07:28.213: INFO: >>> kubeConfig: /root/.kube/config Apr 22 21:07:28.218: INFO: Cluster IP family: ipv4 SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:07:28.218: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath Apr 22 21:07:28.284: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-fkn4 STEP: Creating a pod to test atomic-volume-subpath Apr 22 21:07:28.298: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fkn4" in namespace "subpath-9174" to be "success or failure" Apr 22 21:07:28.302: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.606731ms Apr 22 21:07:30.306: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008042134s Apr 22 21:07:32.311: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012189515s Apr 22 21:07:34.316: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Running", Reason="", readiness=true. Elapsed: 6.017146025s Apr 22 21:07:36.319: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Running", Reason="", readiness=true. Elapsed: 8.021000812s Apr 22 21:07:38.324: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Running", Reason="", readiness=true. Elapsed: 10.025182464s Apr 22 21:07:40.328: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Running", Reason="", readiness=true. Elapsed: 12.029567105s Apr 22 21:07:42.332: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.034117675s Apr 22 21:07:44.337: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Running", Reason="", readiness=true. Elapsed: 16.038988343s Apr 22 21:07:46.341: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Running", Reason="", readiness=true. Elapsed: 18.04268622s Apr 22 21:07:48.346: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Running", Reason="", readiness=true. Elapsed: 20.047163086s Apr 22 21:07:50.349: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Running", Reason="", readiness=true. Elapsed: 22.051036978s Apr 22 21:07:52.511: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Running", Reason="", readiness=true. Elapsed: 24.212753995s Apr 22 21:07:54.515: INFO: Pod "pod-subpath-test-configmap-fkn4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.216475487s STEP: Saw pod success Apr 22 21:07:54.515: INFO: Pod "pod-subpath-test-configmap-fkn4" satisfied condition "success or failure" Apr 22 21:07:54.517: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-fkn4 container test-container-subpath-configmap-fkn4: STEP: delete the pod Apr 22 21:07:54.554: INFO: Waiting for pod pod-subpath-test-configmap-fkn4 to disappear Apr 22 21:07:54.570: INFO: Pod pod-subpath-test-configmap-fkn4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-fkn4 Apr 22 21:07:54.570: INFO: Deleting pod "pod-subpath-test-configmap-fkn4" in namespace "subpath-9174" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:07:54.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9174" for this suite. 
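The poll lines above ("Pod ... Phase=..., readiness=..., Elapsed: ...s") have a fixed shape, so the Pending → Running → Succeeded progression can be extracted programmatically. A minimal sketch (Python; the helper name and regex are ours, not part of the e2e framework):

```python
import re

# Matches the framework's poll lines, e.g.
# Pod "pod-subpath-test-configmap-fkn4": Phase="Running", Reason="", readiness=true. Elapsed: 6.017146025s
POLL_RE = re.compile(
    r'Pod "(?P<pod>[^"]+)": Phase="(?P<phase>[^"]+)", Reason="(?P<reason>[^"]*)", '
    r'readiness=(?P<ready>true|false)\. Elapsed: (?P<elapsed>[\d.]+)s'
)

def parse_poll_line(line):
    """Extract (pod, phase, readiness, elapsed seconds) from one poll line, or None."""
    m = POLL_RE.search(line)
    if not m:
        return None
    return (m.group("pod"), m.group("phase"),
            m.group("ready") == "true", float(m.group("elapsed")))
```

Feeding the run's poll lines through this yields a timeline per pod, from which "success or failure" (terminal phase Succeeded or Failed) can be decided.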
• [SLOW TEST:26.362 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":1,"skipped":5,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:07:54.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0422 21:07:55.723310 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
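Interleaved with the plain log, the runner emits machine-readable progress records such as `{"msg":"PASSED ...","total":278,"completed":1,"skipped":5,"failed":0}`. A sketch of pulling these out of a mixed stream (assuming one JSON object per line, sometimes prefixed with a `•` bullet as above):

```python
import json

def progress_records(lines):
    """Yield parsed progress records ({"msg", "total", "completed", ...})
    from a stream of mixed log lines, skipping everything that is not JSON."""
    for line in lines:
        line = line.strip().lstrip("\u2022").strip()  # drop a leading bullet, if any
        if not line.startswith("{"):
            continue
        try:
            rec = json.loads(line)
        except ValueError:
            continue
        if "msg" in rec and "total" in rec:
            yield rec
```

The last record's `completed`, `skipped`, and `failed` counters give the run's final tally without replaying the whole log.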
Apr 22 21:07:55.723: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:07:55.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1013" for this suite. 
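The recurring "Waiting up to 5m0s for ..." / "Waiting up to 3m0s for ..." messages are all instances of one pattern: poll a condition at an interval until it holds or a deadline passes. A generic sketch of that loop (our helper, not the framework's Go implementation; the clock/sleep parameters are injectable for testing):

```python
import time

def wait_for(condition, timeout, interval=2.0, clock=time.monotonic, sleep=time.sleep):
    """Poll condition() every `interval` seconds until it returns truthy or
    `timeout` seconds elapse; return True on success, False on timeout."""
    deadline = clock() + timeout
    while True:
        if condition():
            return True
        if clock() >= deadline:
            return False
        sleep(min(interval, max(0.0, deadline - clock())))
```

The garbage-collector spec above is exactly this shape: it polls until the ReplicaSet and pod counts reach zero, logging "expected 0 rs, got 1 rs" on each unsuccessful iteration.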
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":2,"skipped":6,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:07:55.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 22 21:07:55.821: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 22 21:07:55.841: INFO: Waiting for terminating namespaces to be deleted... 
Apr 22 21:07:55.843: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 22 21:07:55.848: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:07:55.848: INFO: Container kindnet-cni ready: true, restart count 0 Apr 22 21:07:55.848: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:07:55.848: INFO: Container kube-proxy ready: true, restart count 0 Apr 22 21:07:55.848: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 22 21:07:55.867: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 22 21:07:55.867: INFO: Container kube-hunter ready: false, restart count 0 Apr 22 21:07:55.867: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:07:55.867: INFO: Container kindnet-cni ready: true, restart count 0 Apr 22 21:07:55.867: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 22 21:07:55.867: INFO: Container kube-bench ready: false, restart count 0 Apr 22 21:07:55.867: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:07:55.867: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-47ad8d79-8063-4e23-ac20-d688af18d9d0 42 STEP: Trying to relaunch the pod, now with labels. 
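The NodeSelector predicate being validated here passes when the node's labels are a superset of the pod's `nodeSelector` map, with exact key/value matches. A minimal sketch of that rule (label key and value taken from the log above; the helper name is ours):

```python
def node_selector_matches(node_labels, node_selector):
    """True iff every key/value pair in the pod's nodeSelector is present
    verbatim in the node's labels (Kubernetes exact-match semantics)."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())
```

This is why the spec first applies the random label `kubernetes.io/e2e-47ad8d79-8063-4e23-ac20-d688af18d9d0=42` to the chosen node: only then can a pod selecting on that pair schedule there.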
STEP: removing the label kubernetes.io/e2e-47ad8d79-8063-4e23-ac20-d688af18d9d0 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-47ad8d79-8063-4e23-ac20-d688af18d9d0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:08:03.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-62" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.264 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":3,"skipped":36,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:08:03.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have 
monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-9756ec95-761a-4239-a95f-e625883d428a in namespace container-probe-2566 Apr 22 21:08:08.152: INFO: Started pod liveness-9756ec95-761a-4239-a95f-e625883d428a in namespace container-probe-2566 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 21:08:08.155: INFO: Initial restart count of pod liveness-9756ec95-761a-4239-a95f-e625883d428a is 0 Apr 22 21:08:24.191: INFO: Restart count of pod container-probe-2566/liveness-9756ec95-761a-4239-a95f-e625883d428a is now 1 (16.036024703s elapsed) Apr 22 21:08:44.231: INFO: Restart count of pod container-probe-2566/liveness-9756ec95-761a-4239-a95f-e625883d428a is now 2 (36.076027873s elapsed) Apr 22 21:09:04.272: INFO: Restart count of pod container-probe-2566/liveness-9756ec95-761a-4239-a95f-e625883d428a is now 3 (56.116977414s elapsed) Apr 22 21:09:25.718: INFO: Restart count of pod container-probe-2566/liveness-9756ec95-761a-4239-a95f-e625883d428a is now 4 (1m17.562574301s elapsed) Apr 22 21:10:37.899: INFO: Restart count of pod container-probe-2566/liveness-9756ec95-761a-4239-a95f-e625883d428a is now 5 (2m29.744022231s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:10:37.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2566" for this suite. 
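The invariant this spec checks is that successive observations of the pod's `restartCount` never decrease; with the counts observed in the log (0, 1, 2, 3, 4, 5) that reduces to a pairwise scan. A sketch of the check (our helper, not the framework's code):

```python
def restarts_monotonic(counts):
    """True iff each observed restartCount is >= the previous observation."""
    return all(b >= a for a, b in zip(counts, counts[1:]))
```

Repeated observations of the same count are allowed; only a decrease (which would imply the counter was reset) fails the spec.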
• [SLOW TEST:153.965 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":45,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:10:37.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-64a3d3ab-4d47-496c-bcc5-42b7843fb808 Apr 22 21:10:38.044: INFO: Pod name my-hostname-basic-64a3d3ab-4d47-496c-bcc5-42b7843fb808: Found 0 pods out of 1 Apr 22 21:10:43.057: INFO: Pod name my-hostname-basic-64a3d3ab-4d47-496c-bcc5-42b7843fb808: Found 1 pods out of 1 Apr 22 21:10:43.057: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-64a3d3ab-4d47-496c-bcc5-42b7843fb808" are running Apr 22 21:10:43.065: INFO: Pod "my-hostname-basic-64a3d3ab-4d47-496c-bcc5-42b7843fb808-vtkpc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 
00:00:00 +0000 UTC LastTransitionTime:2020-04-22 21:10:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-22 21:10:40 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-22 21:10:40 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-22 21:10:38 +0000 UTC Reason: Message:}]) Apr 22 21:10:43.066: INFO: Trying to dial the pod Apr 22 21:10:48.077: INFO: Controller my-hostname-basic-64a3d3ab-4d47-496c-bcc5-42b7843fb808: Got expected result from replica 1 [my-hostname-basic-64a3d3ab-4d47-496c-bcc5-42b7843fb808-vtkpc]: "my-hostname-basic-64a3d3ab-4d47-496c-bcc5-42b7843fb808-vtkpc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:10:48.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7689" for this suite. 
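The conditions dump above is what "is running (conditions: ...)" is derived from: a pod counts as ready when its `Ready` condition has status `True`. A sketch of that lookup over condition records shaped like the log's (field names as in the Pod API; the helper is ours):

```python
def is_pod_ready(conditions):
    """True iff the pod's Ready condition exists and its Status is "True"."""
    return any(c.get("Type") == "Ready" and c.get("Status") == "True"
               for c in conditions)
```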
• [SLOW TEST:10.130 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":5,"skipped":51,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:10:48.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 21:10:48.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba9e15a1-7fd9-498b-b304-43124b447c6f" in namespace "projected-3397" to be "success or failure" Apr 22 21:10:48.149: INFO: Pod "downwardapi-volume-ba9e15a1-7fd9-498b-b304-43124b447c6f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.377456ms Apr 22 21:10:50.168: INFO: Pod "downwardapi-volume-ba9e15a1-7fd9-498b-b304-43124b447c6f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021816134s Apr 22 21:10:52.171: INFO: Pod "downwardapi-volume-ba9e15a1-7fd9-498b-b304-43124b447c6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025353769s STEP: Saw pod success Apr 22 21:10:52.171: INFO: Pod "downwardapi-volume-ba9e15a1-7fd9-498b-b304-43124b447c6f" satisfied condition "success or failure" Apr 22 21:10:52.174: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ba9e15a1-7fd9-498b-b304-43124b447c6f container client-container: STEP: delete the pod Apr 22 21:10:52.220: INFO: Waiting for pod downwardapi-volume-ba9e15a1-7fd9-498b-b304-43124b447c6f to disappear Apr 22 21:10:52.233: INFO: Pod downwardapi-volume-ba9e15a1-7fd9-498b-b304-43124b447c6f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:10:52.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3397" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":51,"failed":0} ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:10:52.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3227 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3227 I0422 21:10:52.403697 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3227, replica count: 2 I0422 21:10:55.454144 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:10:58.454383 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 21:10:58.454: INFO: Creating new exec pod Apr 22 21:11:03.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=services-3227 execpodbmxpc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 22 21:11:06.043: INFO: stderr: "I0422 21:11:05.949544 30 log.go:172] (0xc0000f4e70) (0xc0005b4820) Create stream\nI0422 21:11:05.949598 30 log.go:172] (0xc0000f4e70) (0xc0005b4820) Stream added, broadcasting: 1\nI0422 21:11:05.951995 30 log.go:172] (0xc0000f4e70) Reply frame received for 1\nI0422 21:11:05.952039 30 log.go:172] (0xc0000f4e70) (0xc0007a4a00) Create stream\nI0422 21:11:05.952056 30 log.go:172] (0xc0000f4e70) (0xc0007a4a00) Stream added, broadcasting: 3\nI0422 21:11:05.953063 30 log.go:172] (0xc0000f4e70) Reply frame received for 3\nI0422 21:11:05.953105 30 log.go:172] (0xc0000f4e70) (0xc0007a4aa0) Create stream\nI0422 21:11:05.953247 30 log.go:172] (0xc0000f4e70) (0xc0007a4aa0) Stream added, broadcasting: 5\nI0422 21:11:05.954346 30 log.go:172] (0xc0000f4e70) Reply frame received for 5\nI0422 21:11:06.036723 30 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0422 21:11:06.036762 30 log.go:172] (0xc0007a4aa0) (5) Data frame handling\nI0422 21:11:06.036787 30 log.go:172] (0xc0007a4aa0) (5) Data frame sent\nI0422 21:11:06.036799 30 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0422 21:11:06.036809 30 log.go:172] (0xc0007a4aa0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0422 21:11:06.036836 30 log.go:172] (0xc0007a4aa0) (5) Data frame sent\nI0422 21:11:06.037284 30 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0422 21:11:06.037318 30 log.go:172] (0xc0007a4aa0) (5) Data frame handling\nI0422 21:11:06.037552 30 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0422 21:11:06.037571 30 log.go:172] (0xc0007a4a00) (3) Data frame handling\nI0422 21:11:06.039078 30 log.go:172] (0xc0000f4e70) Data frame received for 1\nI0422 21:11:06.039095 30 log.go:172] (0xc0005b4820) (1) Data frame handling\nI0422 21:11:06.039107 30 log.go:172] (0xc0005b4820) 
(1) Data frame sent\nI0422 21:11:06.039115 30 log.go:172] (0xc0000f4e70) (0xc0005b4820) Stream removed, broadcasting: 1\nI0422 21:11:06.039205 30 log.go:172] (0xc0000f4e70) Go away received\nI0422 21:11:06.039381 30 log.go:172] (0xc0000f4e70) (0xc0005b4820) Stream removed, broadcasting: 1\nI0422 21:11:06.039393 30 log.go:172] (0xc0000f4e70) (0xc0007a4a00) Stream removed, broadcasting: 3\nI0422 21:11:06.039399 30 log.go:172] (0xc0000f4e70) (0xc0007a4aa0) Stream removed, broadcasting: 5\n" Apr 22 21:11:06.043: INFO: stdout: "" Apr 22 21:11:06.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3227 execpodbmxpc -- /bin/sh -x -c nc -zv -t -w 2 10.105.108.103 80' Apr 22 21:11:06.283: INFO: stderr: "I0422 21:11:06.178059 62 log.go:172] (0xc000105600) (0xc00065fb80) Create stream\nI0422 21:11:06.178117 62 log.go:172] (0xc000105600) (0xc00065fb80) Stream added, broadcasting: 1\nI0422 21:11:06.181577 62 log.go:172] (0xc000105600) Reply frame received for 1\nI0422 21:11:06.181628 62 log.go:172] (0xc000105600) (0xc00065fd60) Create stream\nI0422 21:11:06.181643 62 log.go:172] (0xc000105600) (0xc00065fd60) Stream added, broadcasting: 3\nI0422 21:11:06.182638 62 log.go:172] (0xc000105600) Reply frame received for 3\nI0422 21:11:06.182672 62 log.go:172] (0xc000105600) (0xc000a84000) Create stream\nI0422 21:11:06.182684 62 log.go:172] (0xc000105600) (0xc000a84000) Stream added, broadcasting: 5\nI0422 21:11:06.183840 62 log.go:172] (0xc000105600) Reply frame received for 5\nI0422 21:11:06.261588 62 log.go:172] (0xc000105600) Data frame received for 3\nI0422 21:11:06.261743 62 log.go:172] (0xc00065fd60) (3) Data frame handling\nI0422 21:11:06.262097 62 log.go:172] (0xc000105600) Data frame received for 5\nI0422 21:11:06.262180 62 log.go:172] (0xc000a84000) (5) Data frame handling\nI0422 21:11:06.262263 62 log.go:172] (0xc000a84000) (5) Data frame sent\nI0422 21:11:06.262330 62 log.go:172] (0xc000105600) Data frame received for 
5\nI0422 21:11:06.262405 62 log.go:172] (0xc000a84000) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.108.103 80\nConnection to 10.105.108.103 80 port [tcp/http] succeeded!\nI0422 21:11:06.278321 62 log.go:172] (0xc000105600) Data frame received for 1\nI0422 21:11:06.278344 62 log.go:172] (0xc00065fb80) (1) Data frame handling\nI0422 21:11:06.278361 62 log.go:172] (0xc00065fb80) (1) Data frame sent\nI0422 21:11:06.278380 62 log.go:172] (0xc000105600) (0xc00065fb80) Stream removed, broadcasting: 1\nI0422 21:11:06.278393 62 log.go:172] (0xc000105600) Go away received\nI0422 21:11:06.278939 62 log.go:172] (0xc000105600) (0xc00065fb80) Stream removed, broadcasting: 1\nI0422 21:11:06.278974 62 log.go:172] (0xc000105600) (0xc00065fd60) Stream removed, broadcasting: 3\nI0422 21:11:06.278990 62 log.go:172] (0xc000105600) (0xc000a84000) Stream removed, broadcasting: 5\n" Apr 22 21:11:06.283: INFO: stdout: "" Apr 22 21:11:06.283: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:11:06.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3227" for this suite. 
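The `nc -zv -t -w 2 <host> 80` probes run in the exec pod above are plain TCP connect checks with a 2-second timeout, first against the service name, then against its ClusterIP. An equivalent sketch outside the cluster (hypothetical helper; `socket.create_connection` does the connect-and-close that `nc -z` reports as "succeeded"):

```python
import socket

def tcp_check(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within `timeout`
    seconds, mirroring what `nc -zv -t -w 2 host port` calls "succeeded"."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Checking both the DNS name and the ClusterIP, as the spec does, separates DNS resolution failures from kube-proxy forwarding failures.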
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.086 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":7,"skipped":51,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:11:06.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 22 21:11:06.407: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a 6fb263eb-903b-4f88-9452-fca487c8324e 10214866 0 2020-04-22 21:11:06 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 22 21:11:06.407: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a 6fb263eb-903b-4f88-9452-fca487c8324e 10214866 0 2020-04-22 21:11:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 22 21:11:16.416: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a 6fb263eb-903b-4f88-9452-fca487c8324e 10214923 0 2020-04-22 21:11:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 22 21:11:16.416: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a 6fb263eb-903b-4f88-9452-fca487c8324e 10214923 0 2020-04-22 21:11:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 22 21:11:26.425: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a 6fb263eb-903b-4f88-9452-fca487c8324e 10214953 0 2020-04-22 21:11:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 22 21:11:26.425: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a 
6fb263eb-903b-4f88-9452-fca487c8324e 10214953 0 2020-04-22 21:11:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 22 21:11:36.432: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a 6fb263eb-903b-4f88-9452-fca487c8324e 10214983 0 2020-04-22 21:11:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 22 21:11:36.433: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-a 6fb263eb-903b-4f88-9452-fca487c8324e 10214983 0 2020-04-22 21:11:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 22 21:11:46.440: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-b e80137ba-cd0c-48b6-b43a-ebf5672fbff8 10215013 0 2020-04-22 21:11:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 22 21:11:46.441: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-b e80137ba-cd0c-48b6-b43a-ebf5672fbff8 10215013 0 2020-04-22 21:11:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 22 21:11:56.448: INFO: 
Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-b e80137ba-cd0c-48b6-b43a-ebf5672fbff8 10215043 0 2020-04-22 21:11:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 22 21:11:56.448: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7150 /api/v1/namespaces/watch-7150/configmaps/e2e-watch-test-configmap-b e80137ba-cd0c-48b6-b43a-ebf5672fbff8 10215043 0 2020-04-22 21:11:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:12:06.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7150" for this suite. • [SLOW TEST:60.114 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":8,"skipped":61,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:12:06.460: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 21:12:06.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33e86947-068a-486d-b6e5-675736f03d9d" in namespace "downward-api-4106" to be "success or failure" Apr 22 21:12:06.539: INFO: Pod "downwardapi-volume-33e86947-068a-486d-b6e5-675736f03d9d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.490147ms Apr 22 21:12:08.542: INFO: Pod "downwardapi-volume-33e86947-068a-486d-b6e5-675736f03d9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016887345s Apr 22 21:12:10.547: INFO: Pod "downwardapi-volume-33e86947-068a-486d-b6e5-675736f03d9d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021408369s STEP: Saw pod success Apr 22 21:12:10.547: INFO: Pod "downwardapi-volume-33e86947-068a-486d-b6e5-675736f03d9d" satisfied condition "success or failure" Apr 22 21:12:10.550: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-33e86947-068a-486d-b6e5-675736f03d9d container client-container: STEP: delete the pod Apr 22 21:12:10.578: INFO: Waiting for pod downwardapi-volume-33e86947-068a-486d-b6e5-675736f03d9d to disappear Apr 22 21:12:10.582: INFO: Pod downwardapi-volume-33e86947-068a-486d-b6e5-675736f03d9d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:12:10.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4106" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":63,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:12:10.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:12:10.708: INFO: Creating daemon "daemon-set" with a node selector STEP: 
Initially, daemon pods should not be running on any nodes. Apr 22 21:12:10.738: INFO: Number of nodes with available pods: 0 Apr 22 21:12:10.738: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Apr 22 21:12:10.836: INFO: Number of nodes with available pods: 0 Apr 22 21:12:10.836: INFO: Node jerma-worker is running more than one daemon pod Apr 22 21:12:11.841: INFO: Number of nodes with available pods: 0 Apr 22 21:12:11.841: INFO: Node jerma-worker is running more than one daemon pod Apr 22 21:12:12.843: INFO: Number of nodes with available pods: 0 Apr 22 21:12:12.843: INFO: Node jerma-worker is running more than one daemon pod Apr 22 21:12:13.841: INFO: Number of nodes with available pods: 1 Apr 22 21:12:13.841: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 22 21:12:13.872: INFO: Number of nodes with available pods: 1 Apr 22 21:12:13.872: INFO: Number of running nodes: 0, number of available pods: 1 Apr 22 21:12:14.875: INFO: Number of nodes with available pods: 0 Apr 22 21:12:14.875: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 22 21:12:14.887: INFO: Number of nodes with available pods: 0 Apr 22 21:12:14.887: INFO: Node jerma-worker is running more than one daemon pod Apr 22 21:12:15.963: INFO: Number of nodes with available pods: 0 Apr 22 21:12:15.963: INFO: Node jerma-worker is running more than one daemon pod Apr 22 21:12:16.891: INFO: Number of nodes with available pods: 0 Apr 22 21:12:16.892: INFO: Node jerma-worker is running more than one daemon pod Apr 22 21:12:17.906: INFO: Number of nodes with available pods: 0 Apr 22 21:12:17.906: INFO: Node jerma-worker is running more than one daemon pod Apr 22 21:12:18.892: INFO: Number of nodes with available pods: 0 Apr 22 
21:12:18.892: INFO: Node jerma-worker is running more than one daemon pod Apr 22 21:12:19.892: INFO: Number of nodes with available pods: 0 Apr 22 21:12:19.892: INFO: Node jerma-worker is running more than one daemon pod Apr 22 21:12:20.892: INFO: Number of nodes with available pods: 1 Apr 22 21:12:20.892: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2253, will wait for the garbage collector to delete the pods Apr 22 21:12:20.957: INFO: Deleting DaemonSet.extensions daemon-set took: 6.749374ms Apr 22 21:12:21.258: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.258439ms Apr 22 21:12:29.260: INFO: Number of nodes with available pods: 0 Apr 22 21:12:29.260: INFO: Number of running nodes: 0, number of available pods: 0 Apr 22 21:12:29.265: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2253/daemonsets","resourceVersion":"10215219"},"items":null} Apr 22 21:12:29.280: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2253/pods","resourceVersion":"10215219"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:12:29.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2253" for this suite. 
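The node-selector flow exercised above (label a node "blue" to launch the daemon pod, relabel it "green" to unschedule it) can be reproduced by hand. A hedged sketch — the DaemonSet name `daemon-set` and node `jerma-worker` come from the log, but the label key `color` and the manifest filename are illustrative (the e2e framework generates its own label key):

```shell
# Create a DaemonSet whose pod template carries nodeSelector {color: blue}
# (assumes an illustrative manifest daemonset.yaml with that selector).
kubectl apply -f daemonset.yaml

# No node carries the label yet, so no daemon pods are scheduled.
kubectl get pods -l name=daemon-set -o wide

# Label a node "blue": the DaemonSet controller launches a pod on it.
kubectl label node jerma-worker color=blue

# Flip the label to "green": the daemon pod is unscheduled again,
# matching the "wait for daemons to be unscheduled" step in the log.
kubectl label node jerma-worker color=green --overwrite
```

These commands require a live cluster and the assumed manifest, so they are a sketch of the test's steps rather than a runnable snippet.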
• [SLOW TEST:18.744 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":10,"skipped":67,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:12:29.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 21:12:29.447: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10f3c70b-6135-4c88-9845-10789a7b68ac" in namespace "downward-api-7942" to be "success or failure" Apr 22 21:12:29.465: INFO: Pod "downwardapi-volume-10f3c70b-6135-4c88-9845-10789a7b68ac": Phase="Pending", Reason="", readiness=false. Elapsed: 18.160033ms Apr 22 21:12:31.568: INFO: Pod "downwardapi-volume-10f3c70b-6135-4c88-9845-10789a7b68ac": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.120289289s Apr 22 21:12:33.572: INFO: Pod "downwardapi-volume-10f3c70b-6135-4c88-9845-10789a7b68ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124325185s STEP: Saw pod success Apr 22 21:12:33.572: INFO: Pod "downwardapi-volume-10f3c70b-6135-4c88-9845-10789a7b68ac" satisfied condition "success or failure" Apr 22 21:12:33.575: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-10f3c70b-6135-4c88-9845-10789a7b68ac container client-container: STEP: delete the pod Apr 22 21:12:33.612: INFO: Waiting for pod downwardapi-volume-10f3c70b-6135-4c88-9845-10789a7b68ac to disappear Apr 22 21:12:33.619: INFO: Pod downwardapi-volume-10f3c70b-6135-4c88-9845-10789a7b68ac no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:12:33.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7942" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":82,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:12:33.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:12:33.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-8765" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":12,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:12:33.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-4127/secret-test-280444ec-1aeb-4f17-9e9e-157262a9b10f STEP: Creating a pod to test consume secrets Apr 22 21:12:33.753: INFO: Waiting up to 5m0s for pod "pod-configmaps-36df2a8e-cf7d-4306-b832-5db3dfcdc645" in namespace "secrets-4127" to be "success or failure" Apr 22 21:12:33.757: INFO: Pod "pod-configmaps-36df2a8e-cf7d-4306-b832-5db3dfcdc645": Phase="Pending", Reason="", readiness=false. Elapsed: 3.439325ms Apr 22 21:12:35.771: INFO: Pod "pod-configmaps-36df2a8e-cf7d-4306-b832-5db3dfcdc645": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017416817s Apr 22 21:12:37.775: INFO: Pod "pod-configmaps-36df2a8e-cf7d-4306-b832-5db3dfcdc645": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021253646s STEP: Saw pod success Apr 22 21:12:37.775: INFO: Pod "pod-configmaps-36df2a8e-cf7d-4306-b832-5db3dfcdc645" satisfied condition "success or failure" Apr 22 21:12:37.778: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-36df2a8e-cf7d-4306-b832-5db3dfcdc645 container env-test: STEP: delete the pod Apr 22 21:12:37.800: INFO: Waiting for pod pod-configmaps-36df2a8e-cf7d-4306-b832-5db3dfcdc645 to disappear Apr 22 21:12:37.805: INFO: Pod pod-configmaps-36df2a8e-cf7d-4306-b832-5db3dfcdc645 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:12:37.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4127" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":146,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:12:37.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod Apr 22 21:12:37.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 
--image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8211 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 22 21:12:38.002: INFO: stderr: "" Apr 22 21:12:38.002: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Apr 22 21:12:38.003: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 22 21:12:38.003: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8211" to be "running and ready, or succeeded" Apr 22 21:12:38.010: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.812936ms Apr 22 21:12:40.014: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011474006s Apr 22 21:12:42.018: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.01568349s Apr 22 21:12:42.018: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 22 21:12:42.018: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Apr 22 21:12:42.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211' Apr 22 21:12:42.156: INFO: stderr: "" Apr 22 21:12:42.156: INFO: stdout: "I0422 21:12:40.226858 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/6nb 472\nI0422 21:12:40.427097 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/6gd 232\nI0422 21:12:40.627000 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/g7w 247\nI0422 21:12:40.827049 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/qck 330\nI0422 21:12:41.027006 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/m6m 579\nI0422 21:12:41.226970 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/kkj7 231\nI0422 21:12:41.427049 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/dr2t 295\nI0422 21:12:41.626999 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/chbl 559\nI0422 21:12:41.827022 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/stln 495\nI0422 21:12:42.027024 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/nl7c 506\n" STEP: limiting log lines Apr 22 21:12:42.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211 --tail=1' Apr 22 21:12:42.269: INFO: stderr: "" Apr 22 21:12:42.269: INFO: stdout: "I0422 21:12:42.227000 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/prc 483\n" Apr 22 21:12:42.270: INFO: got output "I0422 21:12:42.227000 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/prc 483\n" STEP: limiting log bytes Apr 22 21:12:42.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211 --limit-bytes=1' Apr 22 21:12:42.377: INFO: stderr: "" Apr 22 21:12:42.377: INFO: stdout: "I" Apr 22 
21:12:42.377: INFO: got output "I" STEP: exposing timestamps Apr 22 21:12:42.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211 --tail=1 --timestamps' Apr 22 21:12:42.486: INFO: stderr: "" Apr 22 21:12:42.486: INFO: stdout: "2020-04-22T21:12:42.427169371Z I0422 21:12:42.427012 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/kj8 355\n" Apr 22 21:12:42.486: INFO: got output "2020-04-22T21:12:42.427169371Z I0422 21:12:42.427012 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/kj8 355\n" STEP: restricting to a time range Apr 22 21:12:44.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211 --since=1s' Apr 22 21:12:45.106: INFO: stderr: "" Apr 22 21:12:45.106: INFO: stdout: "I0422 21:12:44.227009 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/swwf 418\nI0422 21:12:44.426996 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/h7mx 336\nI0422 21:12:44.627024 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/nbr 203\nI0422 21:12:44.827025 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/9kv 324\nI0422 21:12:45.027049 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/5pg 417\n" Apr 22 21:12:45.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211 --since=24h' Apr 22 21:12:45.226: INFO: stderr: "" Apr 22 21:12:45.226: INFO: stdout: "I0422 21:12:40.226858 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/6nb 472\nI0422 21:12:40.427097 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/6gd 232\nI0422 21:12:40.627000 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/g7w 247\nI0422 21:12:40.827049 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/qck 330\nI0422 21:12:41.027006 1 
logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/m6m 579\nI0422 21:12:41.226970 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/kkj7 231\nI0422 21:12:41.427049 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/dr2t 295\nI0422 21:12:41.626999 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/chbl 559\nI0422 21:12:41.827022 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/stln 495\nI0422 21:12:42.027024 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/nl7c 506\nI0422 21:12:42.227000 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/prc 483\nI0422 21:12:42.427012 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/kj8 355\nI0422 21:12:42.627030 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/vwv 472\nI0422 21:12:42.827018 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/nf2 335\nI0422 21:12:43.027009 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/qrcm 579\nI0422 21:12:43.227017 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/kf9 289\nI0422 21:12:43.427029 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/qkw 509\nI0422 21:12:43.627005 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/pld4 569\nI0422 21:12:43.827000 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/24nf 408\nI0422 21:12:44.027010 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/nqks 249\nI0422 21:12:44.227009 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/swwf 418\nI0422 21:12:44.426996 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/h7mx 336\nI0422 21:12:44.627024 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/nbr 203\nI0422 21:12:44.827025 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/9kv 324\nI0422 21:12:45.027049 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/5pg 417\n" [AfterEach] Kubectl logs 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Apr 22 21:12:45.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8211' Apr 22 21:12:59.271: INFO: stderr: "" Apr 22 21:12:59.271: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:12:59.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8211" for this suite. • [SLOW TEST:21.457 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":14,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:12:59.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:13:10.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6859" for this suite. • [SLOW TEST:11.136 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":15,"skipped":169,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:13:10.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:13:11.359: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:13:13.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186791, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186791, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186791, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186791, 
loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:13:15.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186791, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186791, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186791, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186791, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:13:18.464: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:13:18.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-796" for this suite. STEP: Destroying namespace "webhook-796-markers" for this suite. 
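The "create a configmap that should be updated by the webhook" step can be checked interactively once a mutating webhook is registered. A hedged sketch — the namespace `webhook-796` appears in the log, but the ConfigMap name and data key are illustrative, and the mutation applied depends on the registered webhook:

```shell
# Create a fresh ConfigMap; the admission chain runs before it is persisted.
kubectl create configmap to-be-mutated --from-literal=foo=bar -n webhook-796

# Read it back: any fields injected by the mutating webhook (the e2e test
# expects an added data entry) will be visible in the stored object.
kubectl get configmap to-be-mutated -n webhook-796 -o yaml
```

Both commands assume a live cluster with the webhook configuration still in place.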
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.191 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":16,"skipped":178,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:13:18.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7200.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7200.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7200.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7200.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7200.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7200.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 21:13:24.821: INFO: DNS probes using dns-7200/dns-test-58157a61-08cb-47cb-b5e0-8c4eb7d96111 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:13:24.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7200" for this suite. 
• [SLOW TEST:6.327 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":17,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:13:24.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7657 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 22 21:13:25.285: INFO: Found 0 stateful pods, waiting for 3 Apr 22 21:13:35.290: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 21:13:35.290: INFO: 
Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 21:13:35.290: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 22 21:13:35.318: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 22 21:13:45.387: INFO: Updating stateful set ss2 Apr 22 21:13:45.414: INFO: Waiting for Pod statefulset-7657/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 22 21:13:55.538: INFO: Found 2 stateful pods, waiting for 3 Apr 22 21:14:05.543: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 21:14:05.543: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 21:14:05.543: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 22 21:14:05.565: INFO: Updating stateful set ss2 Apr 22 21:14:05.585: INFO: Waiting for Pod statefulset-7657/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 22 21:14:15.611: INFO: Updating stateful set ss2 Apr 22 21:14:15.628: INFO: Waiting for StatefulSet statefulset-7657/ss2 to complete update Apr 22 21:14:15.628: INFO: Waiting for Pod statefulset-7657/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 22 21:14:25.636: INFO: Deleting all statefulset in ns statefulset-7657 Apr 22 21:14:25.639: INFO: Scaling statefulset ss2 to 0 Apr 22 21:14:45.795: INFO: 
Waiting for statefulset status.replicas updated to 0 Apr 22 21:14:45.798: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:14:45.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7657" for this suite. • [SLOW TEST:80.879 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":18,"skipped":230,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:14:45.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:14:45.897: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication 
controller svc-latency-rc in namespace svc-latency-5460 I0422 21:14:45.917875 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5460, replica count: 1 I0422 21:14:46.968267 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:14:47.968575 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:14:48.968782 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 21:14:49.112: INFO: Created: latency-svc-4d7cr Apr 22 21:14:49.127: INFO: Got endpoints: latency-svc-4d7cr [58.679228ms] Apr 22 21:14:49.173: INFO: Created: latency-svc-w4bsk Apr 22 21:14:49.185: INFO: Got endpoints: latency-svc-w4bsk [57.563059ms] Apr 22 21:14:49.242: INFO: Created: latency-svc-6pt4x Apr 22 21:14:49.245: INFO: Got endpoints: latency-svc-6pt4x [117.163266ms] Apr 22 21:14:49.278: INFO: Created: latency-svc-4ft5m Apr 22 21:14:49.302: INFO: Got endpoints: latency-svc-4ft5m [174.907321ms] Apr 22 21:14:49.334: INFO: Created: latency-svc-kmfvw Apr 22 21:14:49.396: INFO: Got endpoints: latency-svc-kmfvw [268.826619ms] Apr 22 21:14:49.834: INFO: Created: latency-svc-gdprm Apr 22 21:14:49.838: INFO: Got endpoints: latency-svc-gdprm [710.559971ms] Apr 22 21:14:50.165: INFO: Created: latency-svc-fmn7v Apr 22 21:14:50.170: INFO: Got endpoints: latency-svc-fmn7v [1.042693646s] Apr 22 21:14:50.197: INFO: Created: latency-svc-659qj Apr 22 21:14:50.218: INFO: Got endpoints: latency-svc-659qj [1.090711354s] Apr 22 21:14:50.300: INFO: Created: latency-svc-tsxnq Apr 22 21:14:50.304: INFO: Got endpoints: latency-svc-tsxnq [1.176216472s] Apr 22 21:14:50.327: INFO: Created: latency-svc-ml5bz Apr 22 21:14:50.343: INFO: Got endpoints: latency-svc-ml5bz 
[1.215476797s] Apr 22 21:14:50.365: INFO: Created: latency-svc-295qm Apr 22 21:14:50.373: INFO: Got endpoints: latency-svc-295qm [1.245769485s] Apr 22 21:14:50.396: INFO: Created: latency-svc-pqlkv Apr 22 21:14:50.443: INFO: Got endpoints: latency-svc-pqlkv [1.31563069s] Apr 22 21:14:50.452: INFO: Created: latency-svc-6f5zz Apr 22 21:14:50.471: INFO: Got endpoints: latency-svc-6f5zz [1.343650262s] Apr 22 21:14:50.495: INFO: Created: latency-svc-xfxn4 Apr 22 21:14:50.508: INFO: Got endpoints: latency-svc-xfxn4 [1.379820858s] Apr 22 21:14:50.531: INFO: Created: latency-svc-zmlb9 Apr 22 21:14:50.593: INFO: Got endpoints: latency-svc-zmlb9 [1.465680564s] Apr 22 21:14:50.595: INFO: Created: latency-svc-g4z8d Apr 22 21:14:50.604: INFO: Got endpoints: latency-svc-g4z8d [1.476224769s] Apr 22 21:14:50.629: INFO: Created: latency-svc-pk5vv Apr 22 21:14:50.646: INFO: Got endpoints: latency-svc-pk5vv [1.461419552s] Apr 22 21:14:50.743: INFO: Created: latency-svc-kxmlr Apr 22 21:14:50.747: INFO: Got endpoints: latency-svc-kxmlr [1.502018047s] Apr 22 21:14:50.776: INFO: Created: latency-svc-sq6nr Apr 22 21:14:50.789: INFO: Got endpoints: latency-svc-sq6nr [1.486624949s] Apr 22 21:14:50.809: INFO: Created: latency-svc-mks7s Apr 22 21:14:50.819: INFO: Got endpoints: latency-svc-mks7s [1.423107889s] Apr 22 21:14:50.838: INFO: Created: latency-svc-76x95 Apr 22 21:14:50.886: INFO: Got endpoints: latency-svc-76x95 [1.048145093s] Apr 22 21:14:50.921: INFO: Created: latency-svc-9dj64 Apr 22 21:14:50.935: INFO: Got endpoints: latency-svc-9dj64 [764.882899ms] Apr 22 21:14:51.025: INFO: Created: latency-svc-ddg5t Apr 22 21:14:51.054: INFO: Got endpoints: latency-svc-ddg5t [836.162516ms] Apr 22 21:14:51.056: INFO: Created: latency-svc-stvhk Apr 22 21:14:51.068: INFO: Got endpoints: latency-svc-stvhk [763.823813ms] Apr 22 21:14:51.113: INFO: Created: latency-svc-j4fhz Apr 22 21:14:51.207: INFO: Got endpoints: latency-svc-j4fhz [864.284783ms] Apr 22 21:14:51.239: INFO: Created: 
latency-svc-d494g Apr 22 21:14:51.256: INFO: Got endpoints: latency-svc-d494g [882.843276ms] Apr 22 21:14:51.324: INFO: Created: latency-svc-jnvvd Apr 22 21:14:51.328: INFO: Got endpoints: latency-svc-jnvvd [884.232635ms] Apr 22 21:14:51.361: INFO: Created: latency-svc-9mvvk Apr 22 21:14:51.376: INFO: Got endpoints: latency-svc-9mvvk [904.750345ms] Apr 22 21:14:51.403: INFO: Created: latency-svc-r4kgd Apr 22 21:14:51.412: INFO: Got endpoints: latency-svc-r4kgd [904.832053ms] Apr 22 21:14:51.474: INFO: Created: latency-svc-zsmhq Apr 22 21:14:51.515: INFO: Got endpoints: latency-svc-zsmhq [922.081852ms] Apr 22 21:14:51.559: INFO: Created: latency-svc-9tl29 Apr 22 21:14:51.629: INFO: Got endpoints: latency-svc-9tl29 [1.025524592s] Apr 22 21:14:51.632: INFO: Created: latency-svc-s69zm Apr 22 21:14:51.639: INFO: Got endpoints: latency-svc-s69zm [992.434339ms] Apr 22 21:14:51.661: INFO: Created: latency-svc-t89xn Apr 22 21:14:51.689: INFO: Got endpoints: latency-svc-t89xn [941.824625ms] Apr 22 21:14:51.719: INFO: Created: latency-svc-p66xr Apr 22 21:14:51.767: INFO: Got endpoints: latency-svc-p66xr [977.627926ms] Apr 22 21:14:51.775: INFO: Created: latency-svc-wjgbp Apr 22 21:14:51.791: INFO: Got endpoints: latency-svc-wjgbp [972.125532ms] Apr 22 21:14:51.817: INFO: Created: latency-svc-f7qc2 Apr 22 21:14:51.828: INFO: Got endpoints: latency-svc-f7qc2 [941.017393ms] Apr 22 21:14:51.859: INFO: Created: latency-svc-qxsn9 Apr 22 21:14:51.893: INFO: Got endpoints: latency-svc-qxsn9 [957.453709ms] Apr 22 21:14:51.922: INFO: Created: latency-svc-t4m6j Apr 22 21:14:51.959: INFO: Got endpoints: latency-svc-t4m6j [904.502248ms] Apr 22 21:14:52.038: INFO: Created: latency-svc-pbhm2 Apr 22 21:14:52.048: INFO: Got endpoints: latency-svc-pbhm2 [980.684136ms] Apr 22 21:14:52.069: INFO: Created: latency-svc-j8lw5 Apr 22 21:14:52.085: INFO: Got endpoints: latency-svc-j8lw5 [877.628429ms] Apr 22 21:14:52.110: INFO: Created: latency-svc-95vml Apr 22 21:14:52.127: INFO: Got endpoints: 
latency-svc-95vml [870.633694ms] Apr 22 21:14:52.175: INFO: Created: latency-svc-gkqz4 Apr 22 21:14:52.181: INFO: Got endpoints: latency-svc-gkqz4 [853.566423ms] Apr 22 21:14:52.205: INFO: Created: latency-svc-mfn7c Apr 22 21:14:52.218: INFO: Got endpoints: latency-svc-mfn7c [841.390961ms] Apr 22 21:14:52.242: INFO: Created: latency-svc-kjw84 Apr 22 21:14:52.273: INFO: Got endpoints: latency-svc-kjw84 [860.371107ms] Apr 22 21:14:52.330: INFO: Created: latency-svc-lbbsh Apr 22 21:14:52.355: INFO: Got endpoints: latency-svc-lbbsh [839.621001ms] Apr 22 21:14:52.391: INFO: Created: latency-svc-wsc4c Apr 22 21:14:52.404: INFO: Got endpoints: latency-svc-wsc4c [774.479509ms] Apr 22 21:14:52.474: INFO: Created: latency-svc-ftctk Apr 22 21:14:52.489: INFO: Got endpoints: latency-svc-ftctk [849.665673ms] Apr 22 21:14:52.513: INFO: Created: latency-svc-d8sj2 Apr 22 21:14:52.547: INFO: Got endpoints: latency-svc-d8sj2 [858.048466ms] Apr 22 21:14:52.618: INFO: Created: latency-svc-gkhkf Apr 22 21:14:52.627: INFO: Got endpoints: latency-svc-gkhkf [860.057161ms] Apr 22 21:14:52.656: INFO: Created: latency-svc-srn2b Apr 22 21:14:52.663: INFO: Got endpoints: latency-svc-srn2b [871.064423ms] Apr 22 21:14:52.687: INFO: Created: latency-svc-pjflx Apr 22 21:14:52.700: INFO: Got endpoints: latency-svc-pjflx [872.343111ms] Apr 22 21:14:52.755: INFO: Created: latency-svc-dl8vq Apr 22 21:14:52.766: INFO: Got endpoints: latency-svc-dl8vq [872.671941ms] Apr 22 21:14:52.786: INFO: Created: latency-svc-tf8br Apr 22 21:14:52.802: INFO: Got endpoints: latency-svc-tf8br [843.019559ms] Apr 22 21:14:52.846: INFO: Created: latency-svc-m6m2x Apr 22 21:14:52.899: INFO: Got endpoints: latency-svc-m6m2x [850.992913ms] Apr 22 21:14:52.933: INFO: Created: latency-svc-ghncj Apr 22 21:14:52.947: INFO: Got endpoints: latency-svc-ghncj [861.92825ms] Apr 22 21:14:52.969: INFO: Created: latency-svc-mvkdq Apr 22 21:14:52.982: INFO: Got endpoints: latency-svc-mvkdq [855.441488ms] Apr 22 21:14:53.051: INFO: 
Created: latency-svc-rn6jp Apr 22 21:14:53.067: INFO: Got endpoints: latency-svc-rn6jp [885.374834ms] Apr 22 21:14:53.093: INFO: Created: latency-svc-z5j58 Apr 22 21:14:53.103: INFO: Got endpoints: latency-svc-z5j58 [885.302821ms] Apr 22 21:14:53.128: INFO: Created: latency-svc-bc722 Apr 22 21:14:53.139: INFO: Got endpoints: latency-svc-bc722 [866.384852ms] Apr 22 21:14:53.209: INFO: Created: latency-svc-5s7rk Apr 22 21:14:53.224: INFO: Got endpoints: latency-svc-5s7rk [868.740141ms] Apr 22 21:14:53.254: INFO: Created: latency-svc-72pwh Apr 22 21:14:53.272: INFO: Got endpoints: latency-svc-72pwh [867.653665ms] Apr 22 21:14:53.355: INFO: Created: latency-svc-ncgld Apr 22 21:14:53.361: INFO: Got endpoints: latency-svc-ncgld [872.548025ms] Apr 22 21:14:53.383: INFO: Created: latency-svc-cxxlh Apr 22 21:14:53.398: INFO: Got endpoints: latency-svc-cxxlh [851.579159ms] Apr 22 21:14:53.424: INFO: Created: latency-svc-6fxnj Apr 22 21:14:53.510: INFO: Got endpoints: latency-svc-6fxnj [882.853995ms] Apr 22 21:14:53.548: INFO: Created: latency-svc-jpgl4 Apr 22 21:14:53.573: INFO: Got endpoints: latency-svc-jpgl4 [910.774963ms] Apr 22 21:14:53.604: INFO: Created: latency-svc-ljh5t Apr 22 21:14:53.648: INFO: Got endpoints: latency-svc-ljh5t [947.679564ms] Apr 22 21:14:53.705: INFO: Created: latency-svc-rfmbf Apr 22 21:14:53.717: INFO: Got endpoints: latency-svc-rfmbf [951.475546ms] Apr 22 21:14:53.747: INFO: Created: latency-svc-jgskx Apr 22 21:14:53.785: INFO: Got endpoints: latency-svc-jgskx [982.884028ms] Apr 22 21:14:53.868: INFO: Created: latency-svc-j85rz Apr 22 21:14:53.917: INFO: Got endpoints: latency-svc-j85rz [1.017115106s] Apr 22 21:14:53.970: INFO: Created: latency-svc-7xtj4 Apr 22 21:14:53.994: INFO: Got endpoints: latency-svc-7xtj4 [1.046690712s] Apr 22 21:14:54.055: INFO: Created: latency-svc-nl86f Apr 22 21:14:54.060: INFO: Got endpoints: latency-svc-nl86f [1.077130483s] Apr 22 21:14:54.114: INFO: Created: latency-svc-wf6vv Apr 22 21:14:54.132: INFO: Got 
endpoints: latency-svc-wf6vv [1.065385939s] Apr 22 21:14:54.186: INFO: Created: latency-svc-z86ln Apr 22 21:14:54.192: INFO: Got endpoints: latency-svc-z86ln [1.089018478s] Apr 22 21:14:54.238: INFO: Created: latency-svc-67hw4 Apr 22 21:14:54.259: INFO: Got endpoints: latency-svc-67hw4 [1.119404855s] Apr 22 21:14:54.342: INFO: Created: latency-svc-kqnhs Apr 22 21:14:54.367: INFO: Got endpoints: latency-svc-kqnhs [1.142631374s] Apr 22 21:14:54.396: INFO: Created: latency-svc-gq4xd Apr 22 21:14:54.409: INFO: Got endpoints: latency-svc-gq4xd [1.137158952s] Apr 22 21:14:54.461: INFO: Created: latency-svc-sblsb Apr 22 21:14:54.497: INFO: Got endpoints: latency-svc-sblsb [1.135467337s] Apr 22 21:14:54.527: INFO: Created: latency-svc-8xc2l Apr 22 21:14:54.536: INFO: Got endpoints: latency-svc-8xc2l [1.137690548s] Apr 22 21:14:54.606: INFO: Created: latency-svc-s67nw Apr 22 21:14:54.636: INFO: Got endpoints: latency-svc-s67nw [1.126541341s] Apr 22 21:14:54.666: INFO: Created: latency-svc-n5wqg Apr 22 21:14:54.680: INFO: Got endpoints: latency-svc-n5wqg [1.106147853s] Apr 22 21:14:54.755: INFO: Created: latency-svc-nmr6q Apr 22 21:14:54.758: INFO: Got endpoints: latency-svc-nmr6q [1.110475496s] Apr 22 21:14:54.791: INFO: Created: latency-svc-gktdm Apr 22 21:14:54.802: INFO: Got endpoints: latency-svc-gktdm [1.08534152s] Apr 22 21:14:54.822: INFO: Created: latency-svc-4sf55 Apr 22 21:14:54.845: INFO: Got endpoints: latency-svc-4sf55 [1.059931701s] Apr 22 21:14:54.912: INFO: Created: latency-svc-5qzzk Apr 22 21:14:54.917: INFO: Got endpoints: latency-svc-5qzzk [999.80388ms] Apr 22 21:14:54.952: INFO: Created: latency-svc-zts72 Apr 22 21:14:54.977: INFO: Got endpoints: latency-svc-zts72 [983.486563ms] Apr 22 21:14:55.006: INFO: Created: latency-svc-fvcvs Apr 22 21:14:55.067: INFO: Got endpoints: latency-svc-fvcvs [1.007379193s] Apr 22 21:14:55.068: INFO: Created: latency-svc-pkrwg Apr 22 21:14:55.073: INFO: Got endpoints: latency-svc-pkrwg [940.844991ms] Apr 22 21:14:55.104: 
INFO: Created: latency-svc-cwrvs Apr 22 21:14:55.156: INFO: Got endpoints: latency-svc-cwrvs [963.606763ms] Apr 22 21:14:55.204: INFO: Created: latency-svc-vkz9x Apr 22 21:14:55.219: INFO: Got endpoints: latency-svc-vkz9x [959.792056ms] Apr 22 21:14:55.254: INFO: Created: latency-svc-zqqrv Apr 22 21:14:55.273: INFO: Got endpoints: latency-svc-zqqrv [906.200436ms] Apr 22 21:14:55.366: INFO: Created: latency-svc-s58cj Apr 22 21:14:55.370: INFO: Got endpoints: latency-svc-s58cj [961.135323ms] Apr 22 21:14:55.408: INFO: Created: latency-svc-bpzlk Apr 22 21:14:55.423: INFO: Got endpoints: latency-svc-bpzlk [925.772135ms] Apr 22 21:14:55.440: INFO: Created: latency-svc-dhr5n Apr 22 21:14:55.515: INFO: Got endpoints: latency-svc-dhr5n [979.209927ms] Apr 22 21:14:55.552: INFO: Created: latency-svc-996n8 Apr 22 21:14:55.567: INFO: Got endpoints: latency-svc-996n8 [930.66655ms] Apr 22 21:14:55.594: INFO: Created: latency-svc-x67jx Apr 22 21:14:55.611: INFO: Got endpoints: latency-svc-x67jx [930.811354ms] Apr 22 21:14:55.647: INFO: Created: latency-svc-wzr27 Apr 22 21:14:55.663: INFO: Got endpoints: latency-svc-wzr27 [905.101547ms] Apr 22 21:14:55.692: INFO: Created: latency-svc-psj46 Apr 22 21:14:55.706: INFO: Got endpoints: latency-svc-psj46 [903.812388ms] Apr 22 21:14:55.728: INFO: Created: latency-svc-29pc6 Apr 22 21:14:55.736: INFO: Got endpoints: latency-svc-29pc6 [891.216559ms] Apr 22 21:14:55.804: INFO: Created: latency-svc-8jcks Apr 22 21:14:55.847: INFO: Got endpoints: latency-svc-8jcks [930.292778ms] Apr 22 21:14:55.847: INFO: Created: latency-svc-ttcpz Apr 22 21:14:55.946: INFO: Got endpoints: latency-svc-ttcpz [969.254002ms] Apr 22 21:14:55.990: INFO: Created: latency-svc-km2b2 Apr 22 21:14:56.013: INFO: Got endpoints: latency-svc-km2b2 [946.071093ms] Apr 22 21:14:56.038: INFO: Created: latency-svc-6s6x4 Apr 22 21:14:56.079: INFO: Got endpoints: latency-svc-6s6x4 [1.005709357s] Apr 22 21:14:56.124: INFO: Created: latency-svc-7fx8p Apr 22 21:14:56.133: INFO: Got 
endpoints: latency-svc-7fx8p [977.542621ms] Apr 22 21:14:56.164: INFO: Created: latency-svc-mfjlz Apr 22 21:14:56.210: INFO: Got endpoints: latency-svc-mfjlz [991.442995ms] Apr 22 21:14:56.224: INFO: Created: latency-svc-6xdlb Apr 22 21:14:56.266: INFO: Got endpoints: latency-svc-6xdlb [993.068517ms] Apr 22 21:14:56.293: INFO: Created: latency-svc-dbps4 Apr 22 21:14:56.360: INFO: Got endpoints: latency-svc-dbps4 [989.46982ms] Apr 22 21:14:56.398: INFO: Created: latency-svc-9nwzp Apr 22 21:14:56.422: INFO: Got endpoints: latency-svc-9nwzp [999.518719ms] Apr 22 21:14:56.458: INFO: Created: latency-svc-mvm8j Apr 22 21:14:56.515: INFO: Got endpoints: latency-svc-mvm8j [999.643812ms] Apr 22 21:14:56.554: INFO: Created: latency-svc-xt9td Apr 22 21:14:56.560: INFO: Got endpoints: latency-svc-xt9td [993.258532ms] Apr 22 21:14:56.584: INFO: Created: latency-svc-fkwt9 Apr 22 21:14:56.597: INFO: Got endpoints: latency-svc-fkwt9 [986.480642ms] Apr 22 21:14:56.665: INFO: Created: latency-svc-vf5pg Apr 22 21:14:56.686: INFO: Got endpoints: latency-svc-vf5pg [1.022487716s] Apr 22 21:14:56.718: INFO: Created: latency-svc-kg8h2 Apr 22 21:14:56.730: INFO: Got endpoints: latency-svc-kg8h2 [1.02378662s] Apr 22 21:14:56.754: INFO: Created: latency-svc-hb5hv Apr 22 21:14:56.827: INFO: Got endpoints: latency-svc-hb5hv [1.090448336s] Apr 22 21:14:56.828: INFO: Created: latency-svc-9rr59 Apr 22 21:14:56.848: INFO: Got endpoints: latency-svc-9rr59 [1.000904083s] Apr 22 21:14:56.896: INFO: Created: latency-svc-wwjfw Apr 22 21:14:56.911: INFO: Got endpoints: latency-svc-wwjfw [964.098489ms] Apr 22 21:14:56.977: INFO: Created: latency-svc-dth8f Apr 22 21:14:56.980: INFO: Got endpoints: latency-svc-dth8f [966.578655ms] Apr 22 21:14:57.006: INFO: Created: latency-svc-cmprc Apr 22 21:14:57.019: INFO: Got endpoints: latency-svc-cmprc [939.945557ms] Apr 22 21:14:57.046: INFO: Created: latency-svc-886wv Apr 22 21:14:57.061: INFO: Got endpoints: latency-svc-886wv [927.899738ms] Apr 22 21:14:57.110: 
INFO: Created: latency-svc-vl5sp Apr 22 21:14:57.136: INFO: Got endpoints: latency-svc-vl5sp [925.831696ms] Apr 22 21:14:57.136: INFO: Created: latency-svc-fb5w7 Apr 22 21:14:57.152: INFO: Got endpoints: latency-svc-fb5w7 [885.606279ms] Apr 22 21:14:57.174: INFO: Created: latency-svc-9r778 Apr 22 21:14:57.188: INFO: Got endpoints: latency-svc-9r778 [828.399298ms] Apr 22 21:14:57.258: INFO: Created: latency-svc-dxt79 Apr 22 21:14:57.262: INFO: Got endpoints: latency-svc-dxt79 [839.863813ms] Apr 22 21:14:57.292: INFO: Created: latency-svc-x9g5f Apr 22 21:14:57.308: INFO: Got endpoints: latency-svc-x9g5f [793.075519ms] Apr 22 21:14:57.335: INFO: Created: latency-svc-8sckt Apr 22 21:14:57.350: INFO: Got endpoints: latency-svc-8sckt [789.973773ms] Apr 22 21:14:57.396: INFO: Created: latency-svc-nf99s Apr 22 21:14:57.411: INFO: Got endpoints: latency-svc-nf99s [813.804175ms] Apr 22 21:14:57.438: INFO: Created: latency-svc-7fh26 Apr 22 21:14:57.447: INFO: Got endpoints: latency-svc-7fh26 [761.427473ms] Apr 22 21:14:57.474: INFO: Created: latency-svc-ggrxp Apr 22 21:14:57.483: INFO: Got endpoints: latency-svc-ggrxp [753.095233ms] Apr 22 21:14:57.576: INFO: Created: latency-svc-vcq88 Apr 22 21:14:57.579: INFO: Got endpoints: latency-svc-vcq88 [752.333539ms] Apr 22 21:14:57.611: INFO: Created: latency-svc-pdzdd Apr 22 21:14:57.628: INFO: Got endpoints: latency-svc-pdzdd [780.221491ms] Apr 22 21:14:57.659: INFO: Created: latency-svc-vpzt7 Apr 22 21:14:57.670: INFO: Got endpoints: latency-svc-vpzt7 [759.545565ms] Apr 22 21:14:57.719: INFO: Created: latency-svc-jszzx Apr 22 21:14:57.729: INFO: Got endpoints: latency-svc-jszzx [748.911185ms] Apr 22 21:14:57.754: INFO: Created: latency-svc-6fl6h Apr 22 21:14:57.765: INFO: Got endpoints: latency-svc-6fl6h [746.198962ms] Apr 22 21:14:57.790: INFO: Created: latency-svc-9924t Apr 22 21:14:57.801: INFO: Got endpoints: latency-svc-9924t [739.9369ms] Apr 22 21:14:57.883: INFO: Created: latency-svc-ft9nq Apr 22 21:14:57.898: INFO: Got 
endpoints: latency-svc-ft9nq [761.488454ms] Apr 22 21:14:57.928: INFO: Created: latency-svc-kv98h Apr 22 21:14:57.940: INFO: Got endpoints: latency-svc-kv98h [788.384519ms] Apr 22 21:14:57.964: INFO: Created: latency-svc-n5rdd Apr 22 21:14:57.978: INFO: Got endpoints: latency-svc-n5rdd [789.58297ms] Apr 22 21:14:58.013: INFO: Created: latency-svc-qtj74 Apr 22 21:14:58.024: INFO: Got endpoints: latency-svc-qtj74 [762.054265ms] Apr 22 21:14:58.044: INFO: Created: latency-svc-tzp69 Apr 22 21:14:58.061: INFO: Got endpoints: latency-svc-tzp69 [752.730196ms] Apr 22 21:14:58.081: INFO: Created: latency-svc-442mk Apr 22 21:14:58.097: INFO: Got endpoints: latency-svc-442mk [746.936169ms] Apr 22 21:14:58.138: INFO: Created: latency-svc-4blsr Apr 22 21:14:58.146: INFO: Got endpoints: latency-svc-4blsr [734.80472ms] Apr 22 21:14:58.167: INFO: Created: latency-svc-zk9pw Apr 22 21:14:58.186: INFO: Got endpoints: latency-svc-zk9pw [738.227926ms] Apr 22 21:14:58.216: INFO: Created: latency-svc-wllbk Apr 22 21:14:58.224: INFO: Got endpoints: latency-svc-wllbk [740.782514ms] Apr 22 21:14:58.282: INFO: Created: latency-svc-mzf4f Apr 22 21:14:58.290: INFO: Got endpoints: latency-svc-mzf4f [711.272301ms] Apr 22 21:14:58.326: INFO: Created: latency-svc-z9kjk Apr 22 21:14:58.338: INFO: Got endpoints: latency-svc-z9kjk [710.258412ms] Apr 22 21:14:58.360: INFO: Created: latency-svc-c7t94 Apr 22 21:14:58.375: INFO: Got endpoints: latency-svc-c7t94 [704.335247ms] Apr 22 21:14:58.419: INFO: Created: latency-svc-zbbdl Apr 22 21:14:58.423: INFO: Got endpoints: latency-svc-zbbdl [694.436557ms] Apr 22 21:14:58.452: INFO: Created: latency-svc-z94t5 Apr 22 21:14:58.471: INFO: Got endpoints: latency-svc-z94t5 [705.97265ms] Apr 22 21:14:58.512: INFO: Created: latency-svc-xd5xq Apr 22 21:14:58.564: INFO: Got endpoints: latency-svc-xd5xq [762.256316ms] Apr 22 21:14:58.594: INFO: Created: latency-svc-9w57x Apr 22 21:14:58.604: INFO: Got endpoints: latency-svc-9w57x [705.989798ms] Apr 22 21:14:58.632: 
INFO: Created: latency-svc-ll698 Apr 22 21:14:58.646: INFO: Got endpoints: latency-svc-ll698 [706.016653ms] Apr 22 21:14:58.707: INFO: Created: latency-svc-9q66c Apr 22 21:14:58.713: INFO: Got endpoints: latency-svc-9q66c [735.250927ms] Apr 22 21:14:58.749: INFO: Created: latency-svc-wsc8m Apr 22 21:14:58.774: INFO: Got endpoints: latency-svc-wsc8m [749.364507ms] Apr 22 21:14:58.806: INFO: Created: latency-svc-kwk5b Apr 22 21:14:58.863: INFO: Got endpoints: latency-svc-kwk5b [801.890788ms] Apr 22 21:14:58.865: INFO: Created: latency-svc-fjplt Apr 22 21:14:58.890: INFO: Got endpoints: latency-svc-fjplt [792.742019ms] Apr 22 21:14:58.926: INFO: Created: latency-svc-2cxwb Apr 22 21:14:58.948: INFO: Got endpoints: latency-svc-2cxwb [801.76667ms] Apr 22 21:14:59.001: INFO: Created: latency-svc-pz4lc Apr 22 21:14:59.008: INFO: Got endpoints: latency-svc-pz4lc [822.076051ms] Apr 22 21:14:59.051: INFO: Created: latency-svc-w6p4f Apr 22 21:14:59.068: INFO: Got endpoints: latency-svc-w6p4f [843.696125ms] Apr 22 21:14:59.087: INFO: Created: latency-svc-6gjzn Apr 22 21:14:59.132: INFO: Got endpoints: latency-svc-6gjzn [841.440469ms] Apr 22 21:14:59.148: INFO: Created: latency-svc-ktcnw Apr 22 21:14:59.165: INFO: Got endpoints: latency-svc-ktcnw [826.343931ms] Apr 22 21:14:59.187: INFO: Created: latency-svc-5vn27 Apr 22 21:14:59.213: INFO: Got endpoints: latency-svc-5vn27 [838.223692ms] Apr 22 21:14:59.272: INFO: Created: latency-svc-qglxq Apr 22 21:14:59.303: INFO: Got endpoints: latency-svc-qglxq [879.965208ms] Apr 22 21:14:59.321: INFO: Created: latency-svc-5x8x9 Apr 22 21:14:59.339: INFO: Got endpoints: latency-svc-5x8x9 [868.024496ms] Apr 22 21:14:59.357: INFO: Created: latency-svc-2j6j4 Apr 22 21:14:59.402: INFO: Got endpoints: latency-svc-2j6j4 [838.264142ms] Apr 22 21:14:59.409: INFO: Created: latency-svc-xw8qn Apr 22 21:14:59.424: INFO: Got endpoints: latency-svc-xw8qn [820.085417ms] Apr 22 21:14:59.452: INFO: Created: latency-svc-6fgk6 Apr 22 21:14:59.466: INFO: Got 
endpoints: latency-svc-6fgk6 [819.915564ms] Apr 22 21:14:59.575: INFO: Created: latency-svc-ndg2w Apr 22 21:14:59.603: INFO: Created: latency-svc-crxc2 Apr 22 21:14:59.604: INFO: Got endpoints: latency-svc-ndg2w [890.567563ms] Apr 22 21:14:59.617: INFO: Got endpoints: latency-svc-crxc2 [843.045173ms] Apr 22 21:14:59.643: INFO: Created: latency-svc-qkfdt Apr 22 21:14:59.659: INFO: Got endpoints: latency-svc-qkfdt [796.044675ms] Apr 22 21:14:59.719: INFO: Created: latency-svc-r9m66 Apr 22 21:14:59.731: INFO: Got endpoints: latency-svc-r9m66 [840.838631ms] Apr 22 21:14:59.751: INFO: Created: latency-svc-9nmjx Apr 22 21:14:59.761: INFO: Got endpoints: latency-svc-9nmjx [813.744968ms] Apr 22 21:14:59.784: INFO: Created: latency-svc-tcpcl Apr 22 21:14:59.813: INFO: Got endpoints: latency-svc-tcpcl [805.430314ms] Apr 22 21:14:59.868: INFO: Created: latency-svc-ltcxd Apr 22 21:14:59.876: INFO: Got endpoints: latency-svc-ltcxd [808.228908ms] Apr 22 21:14:59.926: INFO: Created: latency-svc-pxzms Apr 22 21:14:59.942: INFO: Got endpoints: latency-svc-pxzms [810.155768ms] Apr 22 21:14:59.961: INFO: Created: latency-svc-h79ln Apr 22 21:15:00.000: INFO: Got endpoints: latency-svc-h79ln [835.546323ms] Apr 22 21:15:00.011: INFO: Created: latency-svc-s9tfd Apr 22 21:15:00.027: INFO: Got endpoints: latency-svc-s9tfd [814.138587ms] Apr 22 21:15:00.047: INFO: Created: latency-svc-5ds56 Apr 22 21:15:00.063: INFO: Got endpoints: latency-svc-5ds56 [759.572226ms] Apr 22 21:15:00.083: INFO: Created: latency-svc-t72s7 Apr 22 21:15:00.099: INFO: Got endpoints: latency-svc-t72s7 [759.86521ms] Apr 22 21:15:00.153: INFO: Created: latency-svc-xn65m Apr 22 21:15:00.166: INFO: Got endpoints: latency-svc-xn65m [763.754409ms] Apr 22 21:15:00.202: INFO: Created: latency-svc-rjl87 Apr 22 21:15:00.214: INFO: Got endpoints: latency-svc-rjl87 [790.134653ms] Apr 22 21:15:00.233: INFO: Created: latency-svc-k6ln8 Apr 22 21:15:00.318: INFO: Got endpoints: latency-svc-k6ln8 [852.015497ms] Apr 22 21:15:00.319: 
INFO: Created: latency-svc-ns8vc Apr 22 21:15:00.328: INFO: Got endpoints: latency-svc-ns8vc [724.038871ms] Apr 22 21:15:00.357: INFO: Created: latency-svc-crsgd Apr 22 21:15:00.381: INFO: Got endpoints: latency-svc-crsgd [764.517134ms] Apr 22 21:15:00.414: INFO: Created: latency-svc-qg8j9 Apr 22 21:15:00.468: INFO: Got endpoints: latency-svc-qg8j9 [808.417365ms] Apr 22 21:15:00.473: INFO: Created: latency-svc-hks59 Apr 22 21:15:00.497: INFO: Got endpoints: latency-svc-hks59 [766.446682ms] Apr 22 21:15:00.562: INFO: Created: latency-svc-7pl4n Apr 22 21:15:00.623: INFO: Got endpoints: latency-svc-7pl4n [861.674658ms] Apr 22 21:15:00.630: INFO: Created: latency-svc-x6h9l Apr 22 21:15:00.636: INFO: Got endpoints: latency-svc-x6h9l [822.98736ms] Apr 22 21:15:00.665: INFO: Created: latency-svc-mr4vj Apr 22 21:15:00.679: INFO: Got endpoints: latency-svc-mr4vj [55.956931ms] Apr 22 21:15:00.711: INFO: Created: latency-svc-z6clh Apr 22 21:15:00.761: INFO: Got endpoints: latency-svc-z6clh [884.712512ms] Apr 22 21:15:00.765: INFO: Created: latency-svc-khvnx Apr 22 21:15:00.781: INFO: Got endpoints: latency-svc-khvnx [839.212393ms] Apr 22 21:15:00.802: INFO: Created: latency-svc-f9g5n Apr 22 21:15:00.818: INFO: Got endpoints: latency-svc-f9g5n [817.35655ms] Apr 22 21:15:00.839: INFO: Created: latency-svc-nrdc7 Apr 22 21:15:00.848: INFO: Got endpoints: latency-svc-nrdc7 [820.456048ms] Apr 22 21:15:00.925: INFO: Created: latency-svc-9kxcx Apr 22 21:15:00.926: INFO: Got endpoints: latency-svc-9kxcx [863.19167ms] Apr 22 21:15:00.964: INFO: Created: latency-svc-gm4tm Apr 22 21:15:01.234: INFO: Got endpoints: latency-svc-gm4tm [1.13460586s] Apr 22 21:15:01.247: INFO: Created: latency-svc-r7zlt Apr 22 21:15:01.552: INFO: Got endpoints: latency-svc-r7zlt [1.386092036s] Apr 22 21:15:01.572: INFO: Created: latency-svc-dnt2r Apr 22 21:15:01.580: INFO: Got endpoints: latency-svc-dnt2r [1.366408028s] Apr 22 21:15:01.604: INFO: Created: latency-svc-65t9t Apr 22 21:15:01.623: INFO: Got 
endpoints: latency-svc-65t9t [1.304623276s] Apr 22 21:15:01.690: INFO: Created: latency-svc-6k6zt Apr 22 21:15:01.692: INFO: Got endpoints: latency-svc-6k6zt [1.363936577s] Apr 22 21:15:01.727: INFO: Created: latency-svc-62c78 Apr 22 21:15:01.744: INFO: Got endpoints: latency-svc-62c78 [1.362253349s] Apr 22 21:15:01.769: INFO: Created: latency-svc-rlft9 Apr 22 21:15:01.779: INFO: Got endpoints: latency-svc-rlft9 [1.311752529s] Apr 22 21:15:01.827: INFO: Created: latency-svc-7s9bs Apr 22 21:15:01.850: INFO: Got endpoints: latency-svc-7s9bs [1.352817728s] Apr 22 21:15:01.851: INFO: Created: latency-svc-xf66c Apr 22 21:15:01.880: INFO: Got endpoints: latency-svc-xf66c [1.244078492s] Apr 22 21:15:01.880: INFO: Latencies: [55.956931ms 57.563059ms 117.163266ms 174.907321ms 268.826619ms 694.436557ms 704.335247ms 705.97265ms 705.989798ms 706.016653ms 710.258412ms 710.559971ms 711.272301ms 724.038871ms 734.80472ms 735.250927ms 738.227926ms 739.9369ms 740.782514ms 746.198962ms 746.936169ms 748.911185ms 749.364507ms 752.333539ms 752.730196ms 753.095233ms 759.545565ms 759.572226ms 759.86521ms 761.427473ms 761.488454ms 762.054265ms 762.256316ms 763.754409ms 763.823813ms 764.517134ms 764.882899ms 766.446682ms 774.479509ms 780.221491ms 788.384519ms 789.58297ms 789.973773ms 790.134653ms 792.742019ms 793.075519ms 796.044675ms 801.76667ms 801.890788ms 805.430314ms 808.228908ms 808.417365ms 810.155768ms 813.744968ms 813.804175ms 814.138587ms 817.35655ms 819.915564ms 820.085417ms 820.456048ms 822.076051ms 822.98736ms 826.343931ms 828.399298ms 835.546323ms 836.162516ms 838.223692ms 838.264142ms 839.212393ms 839.621001ms 839.863813ms 840.838631ms 841.390961ms 841.440469ms 843.019559ms 843.045173ms 843.696125ms 849.665673ms 850.992913ms 851.579159ms 852.015497ms 853.566423ms 855.441488ms 858.048466ms 860.057161ms 860.371107ms 861.674658ms 861.92825ms 863.19167ms 864.284783ms 866.384852ms 867.653665ms 868.024496ms 868.740141ms 870.633694ms 871.064423ms 872.343111ms 872.548025ms 
872.671941ms 877.628429ms 879.965208ms 882.843276ms 882.853995ms 884.232635ms 884.712512ms 885.302821ms 885.374834ms 885.606279ms 890.567563ms 891.216559ms 903.812388ms 904.502248ms 904.750345ms 904.832053ms 905.101547ms 906.200436ms 910.774963ms 922.081852ms 925.772135ms 925.831696ms 927.899738ms 930.292778ms 930.66655ms 930.811354ms 939.945557ms 940.844991ms 941.017393ms 941.824625ms 946.071093ms 947.679564ms 951.475546ms 957.453709ms 959.792056ms 961.135323ms 963.606763ms 964.098489ms 966.578655ms 969.254002ms 972.125532ms 977.542621ms 977.627926ms 979.209927ms 980.684136ms 982.884028ms 983.486563ms 986.480642ms 989.46982ms 991.442995ms 992.434339ms 993.068517ms 993.258532ms 999.518719ms 999.643812ms 999.80388ms 1.000904083s 1.005709357s 1.007379193s 1.017115106s 1.022487716s 1.02378662s 1.025524592s 1.042693646s 1.046690712s 1.048145093s 1.059931701s 1.065385939s 1.077130483s 1.08534152s 1.089018478s 1.090448336s 1.090711354s 1.106147853s 1.110475496s 1.119404855s 1.126541341s 1.13460586s 1.135467337s 1.137158952s 1.137690548s 1.142631374s 1.176216472s 1.215476797s 1.244078492s 1.245769485s 1.304623276s 1.311752529s 1.31563069s 1.343650262s 1.352817728s 1.362253349s 1.363936577s 1.366408028s 1.379820858s 1.386092036s 1.423107889s 1.461419552s 1.465680564s 1.476224769s 1.486624949s 1.502018047s] Apr 22 21:15:01.881: INFO: 50 %ile: 879.965208ms Apr 22 21:15:01.881: INFO: 90 %ile: 1.176216472s Apr 22 21:15:01.881: INFO: 99 %ile: 1.486624949s Apr 22 21:15:01.881: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:15:01.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5460" for this suite. 
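The latency test above sorts its 200 samples and reports 50/90/99 %ile values from the sorted list. A minimal sketch of a nearest-rank percentile over sorted samples (the exact rounding rule the e2e framework uses is an assumption here; the toy data below is not from this run):

```python
import math

def percentile(sorted_samples, p):
    # Nearest-rank style: value at index ceil(p/100 * n) - 1.
    # This rounding rule is an assumption; the e2e framework may round differently.
    n = len(sorted_samples)
    idx = min(n - 1, max(0, math.ceil(p / 100.0 * n) - 1))
    return sorted_samples[idx]

# Toy latencies in seconds (illustrative, not the values logged above)
samples = sorted([0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 2.0])
print(percentile(samples, 50))  # -> 1.2
print(percentile(samples, 90))  # -> 1.6
print(percentile(samples, 99))  # -> 2.0
```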
• [SLOW TEST:16.082 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":19,"skipped":242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:15:01.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:15:02.755: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:15:04.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186902, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186902, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186902, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186902, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:15:07.798: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:15:08.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6660" for this suite. STEP: Destroying namespace "webhook-6660-markers" for this suite. 
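The update/patch steps above toggle the CREATE operation in the validating webhook's rules, which is why the non-compliant configMap is alternately rejected and accepted. A sketch of the rule object being patched, as a plain Python dict mirroring the admissionregistration.k8s.io/v1 schema (the specific group/resource values are assumptions based on the configMaps used in this test):

```python
# One rule from a ValidatingWebhookConfiguration (admissionregistration.k8s.io/v1).
# Removing "CREATE" from `operations` (the test's update step) stops the webhook
# from intercepting configMap creation; patching it back in re-enables rejection.
rule = {
    "apiGroups": [""],          # core API group, which ConfigMap belongs to
    "apiVersions": ["v1"],
    "operations": ["CREATE"],   # the update drops this; the patch restores it
    "resources": ["configmaps"],
}
```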
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.552 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":20,"skipped":284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:15:08.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 22 21:15:13.426: INFO: Successfully updated pod "labelsupdate5121bf7d-eeb4-48ed-86b1-d0b2bb38dce6" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 
22 21:15:15.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-427" for this suite. • [SLOW TEST:7.054 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":319,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:15:15.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 22 21:15:26.600: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 21:15:26.642: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 21:15:28.642: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 21:15:28.654: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 21:15:30.642: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 21:15:30.646: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 21:15:32.642: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 21:15:32.647: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 21:15:34.642: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 21:15:34.650: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 21:15:36.642: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 21:15:36.647: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 21:15:38.642: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 21:15:38.647: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 21:15:40.642: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 21:15:40.646: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:15:40.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1082" for this suite. 
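The pod deleted above carries an HTTP preStop lifecycle hook: on deletion the kubelet issues the GET before terminating the container, which is what the "check prestop hook" step verifies against the handler pod. A sketch of the relevant container fragment as a plain dict following the core/v1 Pod schema (the image, path, and port values are illustrative assumptions, not taken from this run):

```python
# Container fragment with a preStop httpGet lifecycle hook (core/v1 schema).
# The kubelet fires this GET when the pod is deleted, before SIGTERM.
container = {
    "name": "pod-with-prestop-http-hook",
    "image": "k8s.gcr.io/pause:3.1",           # illustrative image
    "lifecycle": {
        "preStop": {
            "httpGet": {"path": "/echo", "port": 8080},  # illustrative handler
        }
    },
}
```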
• [SLOW TEST:25.160 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":325,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:15:40.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 22 21:15:40.739: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:15:57.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9888" for this suite. • [SLOW TEST:16.844 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":23,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:15:57.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:16:01.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "kubelet-test-3543" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:16:01.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 22 21:16:01.816: INFO: Waiting up to 5m0s for pod "pod-45045e68-14db-4f6e-86ba-9a227d3033ac" in namespace "emptydir-5110" to be "success or failure" Apr 22 21:16:01.823: INFO: Pod "pod-45045e68-14db-4f6e-86ba-9a227d3033ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.598742ms Apr 22 21:16:03.826: INFO: Pod "pod-45045e68-14db-4f6e-86ba-9a227d3033ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009990491s Apr 22 21:16:05.834: INFO: Pod "pod-45045e68-14db-4f6e-86ba-9a227d3033ac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018253113s STEP: Saw pod success Apr 22 21:16:05.834: INFO: Pod "pod-45045e68-14db-4f6e-86ba-9a227d3033ac" satisfied condition "success or failure" Apr 22 21:16:05.836: INFO: Trying to get logs from node jerma-worker2 pod pod-45045e68-14db-4f6e-86ba-9a227d3033ac container test-container: STEP: delete the pod Apr 22 21:16:05.895: INFO: Waiting for pod pod-45045e68-14db-4f6e-86ba-9a227d3033ac to disappear Apr 22 21:16:05.906: INFO: Pod pod-45045e68-14db-4f6e-86ba-9a227d3033ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:16:05.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5110" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":377,"failed":0} ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:16:05.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:16:05.978: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "services-5839" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":26,"skipped":377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:16:05.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 22 21:16:06.077: INFO: namespace kubectl-977 Apr 22 21:16:06.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-977' Apr 22 21:16:06.398: INFO: stderr: "" Apr 22 21:16:06.398: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 22 21:16:07.416: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:16:07.416: INFO: Found 0 / 1 Apr 22 21:16:08.402: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:16:08.402: INFO: Found 0 / 1 Apr 22 21:16:09.403: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:16:09.403: INFO: Found 0 / 1 Apr 22 21:16:10.403: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:16:10.403: INFO: Found 1 / 1 Apr 22 21:16:10.403: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 22 21:16:10.407: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:16:10.407: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 22 21:16:10.407: INFO: wait on agnhost-master startup in kubectl-977 Apr 22 21:16:10.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-vsbq9 agnhost-master --namespace=kubectl-977' Apr 22 21:16:10.519: INFO: stderr: "" Apr 22 21:16:10.520: INFO: stdout: "Paused\n" STEP: exposing RC Apr 22 21:16:10.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-977' Apr 22 21:16:10.661: INFO: stderr: "" Apr 22 21:16:10.661: INFO: stdout: "service/rm2 exposed\n" Apr 22 21:16:10.673: INFO: Service rm2 in namespace kubectl-977 found. STEP: exposing service Apr 22 21:16:12.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-977' Apr 22 21:16:12.814: INFO: stderr: "" Apr 22 21:16:12.814: INFO: stdout: "service/rm3 exposed\n" Apr 22 21:16:12.818: INFO: Service rm3 in namespace kubectl-977 found. 
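The `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` step above builds a Service from the RC's selector. Roughly how those flags map onto the resulting core/v1 Service, shown as a plain dict (the selector `app: agnhost` matches the selector logged above; other details are approximations of what expose generates):

```python
# Approximate Service produced by the `kubectl expose rc ... --name=rm2` step.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "rm2", "namespace": "kubectl-977"},
    "spec": {
        "selector": {"app": "agnhost"},            # inherited from the RC's selector
        "ports": [{"port": 1234, "targetPort": 6379}],  # --port / --target-port
    },
}
```

Exposing a service (`rm3`) works the same way, except the selector is copied from the existing Service rather than an RC.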
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:16:14.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-977" for this suite. • [SLOW TEST:8.848 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":27,"skipped":403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:16:14.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:16:15.530: INFO: deployment "sample-webhook-deployment" doesn't have 
the required revision set Apr 22 21:16:17.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186975, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186975, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186975, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723186975, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:16:20.574: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:16:20.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:16:21.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6507" for this suite. STEP: Destroying namespace "webhook-6507-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.078 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":28,"skipped":431,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:16:21.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in 
namespace pod-network-test-7443 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 22 21:16:21.982: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 22 21:16:48.156: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.254:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7443 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:16:48.156: INFO: >>> kubeConfig: /root/.kube/config I0422 21:16:48.191798 6 log.go:172] (0xc001a46b00) (0xc002759220) Create stream I0422 21:16:48.191833 6 log.go:172] (0xc001a46b00) (0xc002759220) Stream added, broadcasting: 1 I0422 21:16:48.194345 6 log.go:172] (0xc001a46b00) Reply frame received for 1 I0422 21:16:48.194389 6 log.go:172] (0xc001a46b00) (0xc0027592c0) Create stream I0422 21:16:48.194403 6 log.go:172] (0xc001a46b00) (0xc0027592c0) Stream added, broadcasting: 3 I0422 21:16:48.195454 6 log.go:172] (0xc001a46b00) Reply frame received for 3 I0422 21:16:48.195514 6 log.go:172] (0xc001a46b00) (0xc00230a500) Create stream I0422 21:16:48.195532 6 log.go:172] (0xc001a46b00) (0xc00230a500) Stream added, broadcasting: 5 I0422 21:16:48.196671 6 log.go:172] (0xc001a46b00) Reply frame received for 5 I0422 21:16:48.271211 6 log.go:172] (0xc001a46b00) Data frame received for 3 I0422 21:16:48.271263 6 log.go:172] (0xc0027592c0) (3) Data frame handling I0422 21:16:48.271302 6 log.go:172] (0xc0027592c0) (3) Data frame sent I0422 21:16:48.271433 6 log.go:172] (0xc001a46b00) Data frame received for 3 I0422 21:16:48.271471 6 log.go:172] (0xc0027592c0) (3) Data frame handling I0422 21:16:48.271532 6 log.go:172] (0xc001a46b00) Data frame received for 5 I0422 21:16:48.271566 6 log.go:172] (0xc00230a500) (5) Data frame handling I0422 21:16:48.273441 6 log.go:172] (0xc001a46b00) Data frame received for 1 I0422 21:16:48.273475 6 
log.go:172] (0xc002759220) (1) Data frame handling I0422 21:16:48.273495 6 log.go:172] (0xc002759220) (1) Data frame sent I0422 21:16:48.273519 6 log.go:172] (0xc001a46b00) (0xc002759220) Stream removed, broadcasting: 1 I0422 21:16:48.273540 6 log.go:172] (0xc001a46b00) Go away received I0422 21:16:48.274048 6 log.go:172] (0xc001a46b00) (0xc002759220) Stream removed, broadcasting: 1 I0422 21:16:48.274077 6 log.go:172] (0xc001a46b00) (0xc0027592c0) Stream removed, broadcasting: 3 I0422 21:16:48.274098 6 log.go:172] (0xc001a46b00) (0xc00230a500) Stream removed, broadcasting: 5 Apr 22 21:16:48.274: INFO: Found all expected endpoints: [netserver-0] Apr 22 21:16:48.277: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.128:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7443 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:16:48.277: INFO: >>> kubeConfig: /root/.kube/config I0422 21:16:48.312953 6 log.go:172] (0xc0017da630) (0xc001ee2500) Create stream I0422 21:16:48.312983 6 log.go:172] (0xc0017da630) (0xc001ee2500) Stream added, broadcasting: 1 I0422 21:16:48.314895 6 log.go:172] (0xc0017da630) Reply frame received for 1 I0422 21:16:48.314936 6 log.go:172] (0xc0017da630) (0xc001d981e0) Create stream I0422 21:16:48.314951 6 log.go:172] (0xc0017da630) (0xc001d981e0) Stream added, broadcasting: 3 I0422 21:16:48.315827 6 log.go:172] (0xc0017da630) Reply frame received for 3 I0422 21:16:48.315885 6 log.go:172] (0xc0017da630) (0xc001ee25a0) Create stream I0422 21:16:48.315906 6 log.go:172] (0xc0017da630) (0xc001ee25a0) Stream added, broadcasting: 5 I0422 21:16:48.316820 6 log.go:172] (0xc0017da630) Reply frame received for 5 I0422 21:16:48.389276 6 log.go:172] (0xc0017da630) Data frame received for 3 I0422 21:16:48.389314 6 log.go:172] (0xc001d981e0) (3) Data frame handling I0422 21:16:48.389336 6 log.go:172] 
(0xc001d981e0) (3) Data frame sent I0422 21:16:48.389347 6 log.go:172] (0xc0017da630) Data frame received for 3 I0422 21:16:48.389356 6 log.go:172] (0xc001d981e0) (3) Data frame handling I0422 21:16:48.389385 6 log.go:172] (0xc0017da630) Data frame received for 5 I0422 21:16:48.389402 6 log.go:172] (0xc001ee25a0) (5) Data frame handling I0422 21:16:48.390950 6 log.go:172] (0xc0017da630) Data frame received for 1 I0422 21:16:48.390970 6 log.go:172] (0xc001ee2500) (1) Data frame handling I0422 21:16:48.390979 6 log.go:172] (0xc001ee2500) (1) Data frame sent I0422 21:16:48.390990 6 log.go:172] (0xc0017da630) (0xc001ee2500) Stream removed, broadcasting: 1 I0422 21:16:48.391023 6 log.go:172] (0xc0017da630) Go away received I0422 21:16:48.391098 6 log.go:172] (0xc0017da630) (0xc001ee2500) Stream removed, broadcasting: 1 I0422 21:16:48.391108 6 log.go:172] (0xc0017da630) (0xc001d981e0) Stream removed, broadcasting: 3 I0422 21:16:48.391114 6 log.go:172] (0xc0017da630) (0xc001ee25a0) Stream removed, broadcasting: 5 Apr 22 21:16:48.391: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:16:48.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7443" for this suite. 
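Annotation: the node-pod check above execs `curl http://<pod-ip>:8080/hostName` from a host-network test pod against each netserver pod and compares the responses to the expected pod names. A minimal sketch of a comparable netserver pod, assuming the agnhost e2e test image (the image tag and pod name here are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                                      # illustrative name
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8  # assumed tag
    args: ["netexec", "--http-port=8080"]                 # serves /hostName on 8080
    ports:
    - containerPort: 8080
```

The `Data frame` lines in the log are the SPDY streams (1 = error, 3 = stdout, 5 = stderr) that carry the exec'd curl output back to the test binary.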
• [SLOW TEST:26.486 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":444,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:16:48.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 22 21:16:48.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1036' Apr 22 
21:16:48.568: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 22 21:16:48.568: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 Apr 22 21:16:48.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1036' Apr 22 21:16:48.752: INFO: stderr: "" Apr 22 21:16:48.752: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:16:48.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1036" for this suite. 
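Annotation: the stderr line above records that generator-based `kubectl run` was deprecated; in later releases `kubectl run` only creates Pods, and a Deployment is created with `kubectl create deployment` instead. A sketch of the Deployment the deprecated `--generator=deployment/apps.v1` form effectively produced, assuming the `run:` label convention used by the old generator:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment    # label key assumed from the old generator
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine
```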
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":30,"skipped":452,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:16:48.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Apr 22 21:16:53.340: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4147 pod-service-account-1d92a3ce-9f1c-408c-a760-ce55d92b3afa -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 22 21:16:53.557: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4147 pod-service-account-1d92a3ce-9f1c-408c-a760-ce55d92b3afa -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 22 21:16:53.735: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4147 pod-service-account-1d92a3ce-9f1c-408c-a760-ce55d92b3afa -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:16:54.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"svcaccounts-4147" for this suite. • [SLOW TEST:5.361 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":31,"skipped":475,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:16:54.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1163.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1163.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 21:17:00.469: INFO: DNS probes using dns-test-a04d8ad7-3386-4e7e-b5ad-52a279937918 succeeded STEP: deleting the pod STEP: changing 
the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1163.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1163.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 21:17:06.578: INFO: File wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local from pod dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 22 21:17:06.582: INFO: File jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local from pod dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 22 21:17:06.582: INFO: Lookups using dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 failed for: [wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local] Apr 22 21:17:11.587: INFO: File wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local from pod dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 22 21:17:11.591: INFO: File jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local from pod dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 22 21:17:11.591: INFO: Lookups using dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 failed for: [wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local] Apr 22 21:17:16.586: INFO: File wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local from pod dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 22 21:17:16.591: INFO: File jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local from pod dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 22 21:17:16.591: INFO: Lookups using dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 failed for: [wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local] Apr 22 21:17:21.587: INFO: File wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local from pod dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 22 21:17:21.591: INFO: File jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local from pod dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 22 21:17:21.591: INFO: Lookups using dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 failed for: [wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local] Apr 22 21:17:26.586: INFO: File wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local from pod dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 22 21:17:26.590: INFO: File jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local from pod dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 22 21:17:26.590: INFO: Lookups using dns-1163/dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 failed for: [wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local] Apr 22 21:17:31.590: INFO: DNS probes using dns-test-db5208cc-9e36-4b71-83ac-f8760bfd2267 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1163.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1163.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1163.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1163.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 21:17:38.158: INFO: DNS probes using dns-test-757339c1-daa9-441b-b310-4f608c751b63 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:17:38.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1163" for this suite. 
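Annotation: the probes above `dig` the service name and expect its CNAME to track the `externalName` field as it changes from foo.example.com to bar.example.com, and then expect an A record once the service is converted to type=ClusterIP. A minimal ExternalName Service of the kind this test mutates (name and namespace taken from the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-1163
spec:
  type: ExternalName
  externalName: bar.example.com    # updated from foo.example.com mid-test
```

The intermediate "contains 'foo.example.com.' instead of 'bar.example.com.'" failures are expected: the probe loop retries until the updated record propagates through cluster DNS.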
• [SLOW TEST:44.234 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":32,"skipped":479,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:17:38.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] 
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:17:38.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3224" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":33,"skipped":480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:17:38.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 22 21:17:42.784: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:17:42.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3023" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":525,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:17:42.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-1eff4a40-f5ee-408f-bccd-4a74b4017e2b STEP: Creating a pod to test consume secrets Apr 22 21:17:42.942: INFO: Waiting up to 5m0s for pod "pod-secrets-f5b9bf18-8c92-41d9-aa65-d4bb57ab4757" in namespace "secrets-7860" to be "success or failure" Apr 22 21:17:42.957: INFO: Pod "pod-secrets-f5b9bf18-8c92-41d9-aa65-d4bb57ab4757": Phase="Pending", Reason="", readiness=false. Elapsed: 14.801356ms Apr 22 21:17:44.985: INFO: Pod "pod-secrets-f5b9bf18-8c92-41d9-aa65-d4bb57ab4757": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.042071418s Apr 22 21:17:47.003: INFO: Pod "pod-secrets-f5b9bf18-8c92-41d9-aa65-d4bb57ab4757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060068074s STEP: Saw pod success Apr 22 21:17:47.003: INFO: Pod "pod-secrets-f5b9bf18-8c92-41d9-aa65-d4bb57ab4757" satisfied condition "success or failure" Apr 22 21:17:47.006: INFO: Trying to get logs from node jerma-worker pod pod-secrets-f5b9bf18-8c92-41d9-aa65-d4bb57ab4757 container secret-volume-test: STEP: delete the pod Apr 22 21:17:47.056: INFO: Waiting for pod pod-secrets-f5b9bf18-8c92-41d9-aa65-d4bb57ab4757 to disappear Apr 22 21:17:47.066: INFO: Pod pod-secrets-f5b9bf18-8c92-41d9-aa65-d4bb57ab4757 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:17:47.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7860" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":549,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:17:47.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name 
configmap-test-volume-34d85c37-ccd3-4e2f-ae08-af68fa7a8ab1 STEP: Creating a pod to test consume configMaps Apr 22 21:17:47.158: INFO: Waiting up to 5m0s for pod "pod-configmaps-62e4de8a-03ce-4d43-b621-dedc46a98df7" in namespace "configmap-4675" to be "success or failure" Apr 22 21:17:47.161: INFO: Pod "pod-configmaps-62e4de8a-03ce-4d43-b621-dedc46a98df7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.580016ms Apr 22 21:17:49.165: INFO: Pod "pod-configmaps-62e4de8a-03ce-4d43-b621-dedc46a98df7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007717254s Apr 22 21:17:51.189: INFO: Pod "pod-configmaps-62e4de8a-03ce-4d43-b621-dedc46a98df7": Phase="Running", Reason="", readiness=true. Elapsed: 4.031142382s Apr 22 21:17:53.192: INFO: Pod "pod-configmaps-62e4de8a-03ce-4d43-b621-dedc46a98df7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034610106s STEP: Saw pod success Apr 22 21:17:53.192: INFO: Pod "pod-configmaps-62e4de8a-03ce-4d43-b621-dedc46a98df7" satisfied condition "success or failure" Apr 22 21:17:53.195: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-62e4de8a-03ce-4d43-b621-dedc46a98df7 container configmap-volume-test: STEP: delete the pod Apr 22 21:17:53.231: INFO: Waiting for pod pod-configmaps-62e4de8a-03ce-4d43-b621-dedc46a98df7 to disappear Apr 22 21:17:53.284: INFO: Pod pod-configmaps-62e4de8a-03ce-4d43-b621-dedc46a98df7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:17:53.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4675" for this suite. 
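Annotation: this spec mounts one ConfigMap through two separate volumes in the same pod. A sketch of such a pod, assuming a busybox test image (pod and mount names are illustrative; the ConfigMap name is from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                 # assumed image
    command: ["cat", "/etc/configmap-volume-1/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-34d85c37-ccd3-4e2f-ae08-af68fa7a8ab1
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-34d85c37-ccd3-4e2f-ae08-af68fa7a8ab1
```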
• [SLOW TEST:6.233 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":555,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:17:53.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-4efdea6f-b586-467d-abc7-4f2b7fdac4a6 STEP: Creating configMap with name cm-test-opt-upd-68d48f22-ee9e-40ad-9883-7f85c8595137 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4efdea6f-b586-467d-abc7-4f2b7fdac4a6 STEP: Updating configmap cm-test-opt-upd-68d48f22-ee9e-40ad-9883-7f85c8595137 STEP: Creating configMap with name cm-test-opt-create-cee11cbb-eb71-49b4-a9a7-35d12e6293d8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:19:05.961: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4825" for this suite. • [SLOW TEST:72.661 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":566,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:19:05.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 21:19:06.072: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63bf8be7-387c-4fad-ac60-c60f5393a021" in namespace "projected-8771" to be "success or failure" Apr 22 21:19:06.079: INFO: Pod "downwardapi-volume-63bf8be7-387c-4fad-ac60-c60f5393a021": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.224302ms Apr 22 21:19:08.550: INFO: Pod "downwardapi-volume-63bf8be7-387c-4fad-ac60-c60f5393a021": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477623778s Apr 22 21:19:10.553: INFO: Pod "downwardapi-volume-63bf8be7-387c-4fad-ac60-c60f5393a021": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.481302528s STEP: Saw pod success Apr 22 21:19:10.553: INFO: Pod "downwardapi-volume-63bf8be7-387c-4fad-ac60-c60f5393a021" satisfied condition "success or failure" Apr 22 21:19:10.556: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-63bf8be7-387c-4fad-ac60-c60f5393a021 container client-container: STEP: delete the pod Apr 22 21:19:10.585: INFO: Waiting for pod downwardapi-volume-63bf8be7-387c-4fad-ac60-c60f5393a021 to disappear Apr 22 21:19:10.594: INFO: Pod downwardapi-volume-63bf8be7-387c-4fad-ac60-c60f5393a021 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:19:10.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8771" for this suite. 
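Annotation: this spec verifies that a per-item `mode` is honored for a file in a projected downward API volume. A sketch of such a volume, with illustrative pod name, image, field path, and mode:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400               # per-item mode under test
```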
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":575,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:19:10.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Apr 22 21:19:10.931: INFO: Waiting up to 5m0s for pod "client-containers-ef722bdd-9078-4d0d-8955-ff309bb953bf" in namespace "containers-4583" to be "success or failure"
Apr 22 21:19:10.936: INFO: Pod "client-containers-ef722bdd-9078-4d0d-8955-ff309bb953bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169836ms
Apr 22 21:19:13.010: INFO: Pod "client-containers-ef722bdd-9078-4d0d-8955-ff309bb953bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078711779s
Apr 22 21:19:15.014: INFO: Pod "client-containers-ef722bdd-9078-4d0d-8955-ff309bb953bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082675136s
STEP: Saw pod success
Apr 22 21:19:15.014: INFO: Pod "client-containers-ef722bdd-9078-4d0d-8955-ff309bb953bf" satisfied condition "success or failure"
Apr 22 21:19:15.017: INFO: Trying to get logs from node jerma-worker pod client-containers-ef722bdd-9078-4d0d-8955-ff309bb953bf container test-container:
STEP: delete the pod
Apr 22 21:19:15.041: INFO: Waiting for pod client-containers-ef722bdd-9078-4d0d-8955-ff309bb953bf to disappear
Apr 22 21:19:15.079: INFO: Pod client-containers-ef722bdd-9078-4d0d-8955-ff309bb953bf no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:19:15.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4583" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":596,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:19:15.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 22 21:19:15.149: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3b38fd8-f651-48da-bd3d-652404ef0bab" in namespace "downward-api-983" to be "success or failure"
Apr 22 21:19:15.153: INFO: Pod "downwardapi-volume-a3b38fd8-f651-48da-bd3d-652404ef0bab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.788686ms
Apr 22 21:19:17.184: INFO: Pod "downwardapi-volume-a3b38fd8-f651-48da-bd3d-652404ef0bab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034802762s
Apr 22 21:19:19.188: INFO: Pod "downwardapi-volume-a3b38fd8-f651-48da-bd3d-652404ef0bab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038772502s
STEP: Saw pod success
Apr 22 21:19:19.188: INFO: Pod "downwardapi-volume-a3b38fd8-f651-48da-bd3d-652404ef0bab" satisfied condition "success or failure"
Apr 22 21:19:19.191: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a3b38fd8-f651-48da-bd3d-652404ef0bab container client-container:
STEP: delete the pod
Apr 22 21:19:19.247: INFO: Waiting for pod downwardapi-volume-a3b38fd8-f651-48da-bd3d-652404ef0bab to disappear
Apr 22 21:19:19.259: INFO: Pod downwardapi-volume-a3b38fd8-f651-48da-bd3d-652404ef0bab no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:19:19.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-983" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":632,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:19:19.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 22 21:19:19.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:19:25.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5936" for this suite.
• [SLOW TEST:6.199 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":640,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:19:25.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 22 21:19:25.550: INFO: Waiting up to 5m0s for pod "pod-e6364ddb-a00d-41ff-8100-0845a91ed451" in namespace "emptydir-1463" to be "success or failure"
Apr 22 21:19:25.552: INFO: Pod "pod-e6364ddb-a00d-41ff-8100-0845a91ed451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.792886ms
Apr 22 21:19:27.557: INFO: Pod "pod-e6364ddb-a00d-41ff-8100-0845a91ed451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00731535s
Apr 22 21:19:29.561: INFO: Pod "pod-e6364ddb-a00d-41ff-8100-0845a91ed451": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011544238s
STEP: Saw pod success
Apr 22 21:19:29.561: INFO: Pod "pod-e6364ddb-a00d-41ff-8100-0845a91ed451" satisfied condition "success or failure"
Apr 22 21:19:29.564: INFO: Trying to get logs from node jerma-worker2 pod pod-e6364ddb-a00d-41ff-8100-0845a91ed451 container test-container:
STEP: delete the pod
Apr 22 21:19:29.599: INFO: Waiting for pod pod-e6364ddb-a00d-41ff-8100-0845a91ed451 to disappear
Apr 22 21:19:29.613: INFO: Pod pod-e6364ddb-a00d-41ff-8100-0845a91ed451 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:19:29.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1463" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":652,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:19:29.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 22 21:19:33.751: INFO: Waiting up to 5m0s for pod "client-envvars-84e795d2-4727-4228-ba2a-ea673553eb2b" in namespace "pods-6168" to be "success or failure"
Apr 22 21:19:33.772: INFO: Pod "client-envvars-84e795d2-4727-4228-ba2a-ea673553eb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.163722ms
Apr 22 21:19:35.775: INFO: Pod "client-envvars-84e795d2-4727-4228-ba2a-ea673553eb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024108555s
Apr 22 21:19:37.780: INFO: Pod "client-envvars-84e795d2-4727-4228-ba2a-ea673553eb2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028359839s
STEP: Saw pod success
Apr 22 21:19:37.780: INFO: Pod "client-envvars-84e795d2-4727-4228-ba2a-ea673553eb2b" satisfied condition "success or failure"
Apr 22 21:19:37.783: INFO: Trying to get logs from node jerma-worker pod client-envvars-84e795d2-4727-4228-ba2a-ea673553eb2b container env3cont:
STEP: delete the pod
Apr 22 21:19:37.801: INFO: Waiting for pod client-envvars-84e795d2-4727-4228-ba2a-ea673553eb2b to disappear
Apr 22 21:19:37.805: INFO: Pod client-envvars-84e795d2-4727-4228-ba2a-ea673553eb2b no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:19:37.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6168" for this suite.
• [SLOW TEST:8.192 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":663,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:19:37.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 21:19:38.408: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 22 21:19:40.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187178, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187178, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187178, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187178, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 21:19:43.485: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 22 21:19:43.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9700-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:19:44.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3304" for this suite.
STEP: Destroying namespace "webhook-3304-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.893 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":44,"skipped":673,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context
When creating a pod with privileged
should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:19:44.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 22 21:19:44.772: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-830f3113-a8b1-4fbd-ac4d-4447cb831adb" in namespace "security-context-test-5761" to be "success or failure"
Apr 22 21:19:44.776: INFO: Pod "busybox-privileged-false-830f3113-a8b1-4fbd-ac4d-4447cb831adb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.918659ms
Apr 22 21:19:46.779: INFO: Pod "busybox-privileged-false-830f3113-a8b1-4fbd-ac4d-4447cb831adb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007624684s
Apr 22 21:19:48.957: INFO: Pod "busybox-privileged-false-830f3113-a8b1-4fbd-ac4d-4447cb831adb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185410985s
Apr 22 21:19:48.957: INFO: Pod "busybox-privileged-false-830f3113-a8b1-4fbd-ac4d-4447cb831adb" satisfied condition "success or failure"
Apr 22 21:19:48.964: INFO: Got logs for pod "busybox-privileged-false-830f3113-a8b1-4fbd-ac4d-4447cb831adb": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:19:48.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5761" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":693,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:19:48.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 22 21:19:49.623: INFO: Waiting up to 5m0s for pod "pod-0ddf5ced-cf35-4f5b-add0-e49124d0643e" in namespace "emptydir-2202" to be "success or failure"
Apr 22 21:19:49.638: INFO: Pod "pod-0ddf5ced-cf35-4f5b-add0-e49124d0643e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.631066ms
Apr 22 21:19:51.650: INFO: Pod "pod-0ddf5ced-cf35-4f5b-add0-e49124d0643e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026968872s
Apr 22 21:19:53.655: INFO: Pod "pod-0ddf5ced-cf35-4f5b-add0-e49124d0643e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031222486s
STEP: Saw pod success
Apr 22 21:19:53.655: INFO: Pod "pod-0ddf5ced-cf35-4f5b-add0-e49124d0643e" satisfied condition "success or failure"
Apr 22 21:19:53.658: INFO: Trying to get logs from node jerma-worker2 pod pod-0ddf5ced-cf35-4f5b-add0-e49124d0643e container test-container:
STEP: delete the pod
Apr 22 21:19:53.676: INFO: Waiting for pod pod-0ddf5ced-cf35-4f5b-add0-e49124d0643e to disappear
Apr 22 21:19:53.680: INFO: Pod pod-0ddf5ced-cf35-4f5b-add0-e49124d0643e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:19:53.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2202" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":708,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:19:53.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-d68d363c-89cc-42c4-8cab-593f4074dfa0
STEP: Creating a pod to test consume secrets
Apr 22 21:19:53.773: INFO: Waiting up to 5m0s for pod "pod-secrets-9fec2cf9-ab7f-43bc-8395-1c8a403357e3" in namespace "secrets-364" to be "success or failure"
Apr 22 21:19:53.798: INFO: Pod "pod-secrets-9fec2cf9-ab7f-43bc-8395-1c8a403357e3": Phase="Pending", Reason="", readiness=false. Elapsed: 25.154604ms
Apr 22 21:19:55.814: INFO: Pod "pod-secrets-9fec2cf9-ab7f-43bc-8395-1c8a403357e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040698053s
Apr 22 21:19:57.818: INFO: Pod "pod-secrets-9fec2cf9-ab7f-43bc-8395-1c8a403357e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044967185s
STEP: Saw pod success
Apr 22 21:19:57.818: INFO: Pod "pod-secrets-9fec2cf9-ab7f-43bc-8395-1c8a403357e3" satisfied condition "success or failure"
Apr 22 21:19:57.821: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-9fec2cf9-ab7f-43bc-8395-1c8a403357e3 container secret-volume-test:
STEP: delete the pod
Apr 22 21:19:57.860: INFO: Waiting for pod pod-secrets-9fec2cf9-ab7f-43bc-8395-1c8a403357e3 to disappear
Apr 22 21:19:57.872: INFO: Pod pod-secrets-9fec2cf9-ab7f-43bc-8395-1c8a403357e3 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:19:57.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-364" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":730,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:19:57.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-6a3f5be4-7af9-4f9f-8374-90c814515c1c
STEP: Creating a pod to test consume configMaps
Apr 22 21:19:57.940: INFO: Waiting up to 5m0s for pod "pod-configmaps-a6a817ac-1892-4dd3-a4c4-3973f2f82eab" in namespace "configmap-6024" to be "success or failure"
Apr 22 21:19:57.944: INFO: Pod "pod-configmaps-a6a817ac-1892-4dd3-a4c4-3973f2f82eab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.195652ms
Apr 22 21:19:59.949: INFO: Pod "pod-configmaps-a6a817ac-1892-4dd3-a4c4-3973f2f82eab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008241582s
Apr 22 21:20:01.953: INFO: Pod "pod-configmaps-a6a817ac-1892-4dd3-a4c4-3973f2f82eab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012439075s
STEP: Saw pod success
Apr 22 21:20:01.953: INFO: Pod "pod-configmaps-a6a817ac-1892-4dd3-a4c4-3973f2f82eab" satisfied condition "success or failure"
Apr 22 21:20:01.957: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-a6a817ac-1892-4dd3-a4c4-3973f2f82eab container configmap-volume-test:
STEP: delete the pod
Apr 22 21:20:02.030: INFO: Waiting for pod pod-configmaps-a6a817ac-1892-4dd3-a4c4-3973f2f82eab to disappear
Apr 22 21:20:02.040: INFO: Pod pod-configmaps-a6a817ac-1892-4dd3-a4c4-3973f2f82eab no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:20:02.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6024" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":734,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:20:02.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 22 21:20:02.159: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e756e01f-9128-4e90-a945-36803a4e14e2" in namespace "downward-api-4625" to be "success or failure"
Apr 22 21:20:02.165: INFO: Pod "downwardapi-volume-e756e01f-9128-4e90-a945-36803a4e14e2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.689344ms
Apr 22 21:20:04.169: INFO: Pod "downwardapi-volume-e756e01f-9128-4e90-a945-36803a4e14e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009800771s
Apr 22 21:20:06.172: INFO: Pod "downwardapi-volume-e756e01f-9128-4e90-a945-36803a4e14e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013151264s
STEP: Saw pod success
Apr 22 21:20:06.173: INFO: Pod "downwardapi-volume-e756e01f-9128-4e90-a945-36803a4e14e2" satisfied condition "success or failure"
Apr 22 21:20:06.175: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e756e01f-9128-4e90-a945-36803a4e14e2 container client-container:
STEP: delete the pod
Apr 22 21:20:06.203: INFO: Waiting for pod downwardapi-volume-e756e01f-9128-4e90-a945-36803a4e14e2 to disappear
Apr 22 21:20:06.226: INFO: Pod downwardapi-volume-e756e01f-9128-4e90-a945-36803a4e14e2 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:20:06.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4625" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":737,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:20:06.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-d637b444-ef5f-4f45-9692-1c7ae1c7fe5f
STEP: Creating a pod to test consume configMaps
Apr 22 21:20:06.312: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a27da2be-8460-4e7f-bb1a-5e7685681830" in namespace "projected-1532" to be "success or failure"
Apr 22 21:20:06.347: INFO: Pod "pod-projected-configmaps-a27da2be-8460-4e7f-bb1a-5e7685681830": Phase="Pending", Reason="", readiness=false. Elapsed: 34.337465ms
Apr 22 21:20:08.349: INFO: Pod "pod-projected-configmaps-a27da2be-8460-4e7f-bb1a-5e7685681830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037106667s
Apr 22 21:20:10.353: INFO: Pod "pod-projected-configmaps-a27da2be-8460-4e7f-bb1a-5e7685681830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040708217s
STEP: Saw pod success
Apr 22 21:20:10.353: INFO: Pod "pod-projected-configmaps-a27da2be-8460-4e7f-bb1a-5e7685681830" satisfied condition "success or failure"
Apr 22 21:20:10.356: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-a27da2be-8460-4e7f-bb1a-5e7685681830 container projected-configmap-volume-test:
STEP: delete the pod
Apr 22 21:20:10.376: INFO: Waiting for pod pod-projected-configmaps-a27da2be-8460-4e7f-bb1a-5e7685681830 to disappear
Apr 22 21:20:10.381: INFO: Pod pod-projected-configmaps-a27da2be-8460-4e7f-bb1a-5e7685681830 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:20:10.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1532" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":750,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:20:10.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Apr 22 21:20:10.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4044'
Apr 22 21:20:10.838: INFO: stderr: ""
Apr 22 21:20:10.838: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 22 21:20:10.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4044'
Apr 22 21:20:10.974: INFO: stderr: ""
Apr 22 21:20:10.974: INFO: stdout: "update-demo-nautilus-cjbzb update-demo-nautilus-v45hq "
Apr 22 21:20:10.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjbzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4044'
Apr 22 21:20:11.083: INFO: stderr: ""
Apr 22 21:20:11.083: INFO: stdout: ""
Apr 22 21:20:11.083: INFO: update-demo-nautilus-cjbzb is created but not running
Apr 22 21:20:16.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4044'
Apr 22 21:20:16.188: INFO: stderr: ""
Apr 22 21:20:16.188: INFO: stdout: "update-demo-nautilus-cjbzb update-demo-nautilus-v45hq "
Apr 22 21:20:16.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjbzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4044'
Apr 22 21:20:16.280: INFO: stderr: ""
Apr 22 21:20:16.280: INFO: stdout: "true"
Apr 22 21:20:16.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjbzb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4044'
Apr 22 21:20:16.375: INFO: stderr: ""
Apr 22 21:20:16.375: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 22 21:20:16.375: INFO: validating pod update-demo-nautilus-cjbzb
Apr 22 21:20:16.379: INFO: got data: { "image": "nautilus.jpg" }
Apr 22 21:20:16.379: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 22 21:20:16.379: INFO: update-demo-nautilus-cjbzb is verified up and running
Apr 22 21:20:16.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v45hq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4044'
Apr 22 21:20:16.475: INFO: stderr: ""
Apr 22 21:20:16.475: INFO: stdout: "true"
Apr 22 21:20:16.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v45hq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4044'
Apr 22 21:20:16.572: INFO: stderr: ""
Apr 22 21:20:16.572: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 22 21:20:16.572: INFO: validating pod update-demo-nautilus-v45hq
Apr 22 21:20:16.576: INFO: got data: { "image": "nautilus.jpg" }
Apr 22 21:20:16.576: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 22 21:20:16.576: INFO: update-demo-nautilus-v45hq is verified up and running STEP: rolling-update to new replication controller Apr 22 21:20:16.578: INFO: scanned /root for discovery docs: Apr 22 21:20:16.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4044' Apr 22 21:20:39.147: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 22 21:20:39.147: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 22 21:20:39.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4044' Apr 22 21:20:39.256: INFO: stderr: "" Apr 22 21:20:39.256: INFO: stdout: "update-demo-kitten-bddfh update-demo-kitten-wppp9 update-demo-nautilus-v45hq " STEP: Replicas for name=update-demo: expected=2 actual=3 Apr 22 21:20:44.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4044' Apr 22 21:20:44.361: INFO: stderr: "" Apr 22 21:20:44.361: INFO: stdout: "update-demo-kitten-bddfh update-demo-kitten-wppp9 " Apr 22 21:20:44.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bddfh -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4044' Apr 22 21:20:44.454: INFO: stderr: "" Apr 22 21:20:44.454: INFO: stdout: "true" Apr 22 21:20:44.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bddfh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4044' Apr 22 21:20:44.558: INFO: stderr: "" Apr 22 21:20:44.558: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 22 21:20:44.558: INFO: validating pod update-demo-kitten-bddfh Apr 22 21:20:44.563: INFO: got data: { "image": "kitten.jpg" } Apr 22 21:20:44.563: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 22 21:20:44.563: INFO: update-demo-kitten-bddfh is verified up and running Apr 22 21:20:44.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wppp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4044' Apr 22 21:20:44.649: INFO: stderr: "" Apr 22 21:20:44.649: INFO: stdout: "true" Apr 22 21:20:44.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wppp9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4044' Apr 22 21:20:44.753: INFO: stderr: "" Apr 22 21:20:44.753: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 22 21:20:44.753: INFO: validating pod update-demo-kitten-wppp9 Apr 22 21:20:44.757: INFO: got data: { "image": "kitten.jpg" } Apr 22 21:20:44.757: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 22 21:20:44.757: INFO: update-demo-kitten-wppp9 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:20:44.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4044" for this suite. • [SLOW TEST:34.292 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":51,"skipped":756,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:20:44.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Apr 22 21:20:44.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 22 21:20:44.994: INFO: stderr: "" Apr 22 21:20:44.994: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:20:44.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6195" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":52,"skipped":766,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:20:45.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-c6d6ad74-4ba0-404d-a955-eebc39c42c9b STEP: Creating secret with name s-test-opt-upd-194936e8-b2d8-46a2-95dc-ddaf857b382f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c6d6ad74-4ba0-404d-a955-eebc39c42c9b STEP: Updating secret s-test-opt-upd-194936e8-b2d8-46a2-95dc-ddaf857b382f STEP: Creating secret with name s-test-opt-create-5b0c25d9-52c6-4d49-a715-bf6fd9e91586 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:20:55.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4235" for this suite. 
• [SLOW TEST:10.224 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":773,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:20:55.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 21:20:55.307: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90a98bdf-149c-4300-91cf-b438e5ef5550" in namespace "downward-api-8554" to be "success or failure" Apr 22 21:20:55.320: INFO: Pod "downwardapi-volume-90a98bdf-149c-4300-91cf-b438e5ef5550": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.72282ms Apr 22 21:20:57.324: INFO: Pod "downwardapi-volume-90a98bdf-149c-4300-91cf-b438e5ef5550": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01712112s Apr 22 21:20:59.341: INFO: Pod "downwardapi-volume-90a98bdf-149c-4300-91cf-b438e5ef5550": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03393785s STEP: Saw pod success Apr 22 21:20:59.341: INFO: Pod "downwardapi-volume-90a98bdf-149c-4300-91cf-b438e5ef5550" satisfied condition "success or failure" Apr 22 21:20:59.344: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-90a98bdf-149c-4300-91cf-b438e5ef5550 container client-container: STEP: delete the pod Apr 22 21:20:59.360: INFO: Waiting for pod downwardapi-volume-90a98bdf-149c-4300-91cf-b438e5ef5550 to disappear Apr 22 21:20:59.380: INFO: Pod downwardapi-volume-90a98bdf-149c-4300-91cf-b438e5ef5550 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:20:59.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8554" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":774,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:20:59.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 21:20:59.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-413579ea-d713-41ff-87da-3404d6e2c544" in namespace "projected-6654" to be "success or failure" Apr 22 21:20:59.473: INFO: Pod "downwardapi-volume-413579ea-d713-41ff-87da-3404d6e2c544": Phase="Pending", Reason="", readiness=false. Elapsed: 10.720769ms Apr 22 21:21:01.593: INFO: Pod "downwardapi-volume-413579ea-d713-41ff-87da-3404d6e2c544": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130302024s Apr 22 21:21:03.603: INFO: Pod "downwardapi-volume-413579ea-d713-41ff-87da-3404d6e2c544": Phase="Running", Reason="", readiness=true. Elapsed: 4.140317853s Apr 22 21:21:05.606: INFO: Pod "downwardapi-volume-413579ea-d713-41ff-87da-3404d6e2c544": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.143611659s STEP: Saw pod success Apr 22 21:21:05.606: INFO: Pod "downwardapi-volume-413579ea-d713-41ff-87da-3404d6e2c544" satisfied condition "success or failure" Apr 22 21:21:05.608: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-413579ea-d713-41ff-87da-3404d6e2c544 container client-container: STEP: delete the pod Apr 22 21:21:05.624: INFO: Waiting for pod downwardapi-volume-413579ea-d713-41ff-87da-3404d6e2c544 to disappear Apr 22 21:21:05.629: INFO: Pod downwardapi-volume-413579ea-d713-41ff-87da-3404d6e2c544 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:21:05.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6654" for this suite. • [SLOW TEST:6.248 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":786,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:21:05.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 22 21:21:05.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1107' Apr 22 21:21:07.751: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 22 21:21:07.751: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Apr 22 21:21:07.755: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 22 21:21:07.762: INFO: scanned /root for discovery docs: Apr 22 21:21:07.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1107' Apr 22 21:21:23.898: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 22 21:21:23.898: INFO: stdout: "Created e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8\nScaling up e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8 up to 1\nScaling 
e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Apr 22 21:21:23.899: INFO: stdout: "Created e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8\nScaling up e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Apr 22 21:21:23.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1107' Apr 22 21:21:23.994: INFO: stderr: "" Apr 22 21:21:23.994: INFO: stdout: "e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8-87nkt " Apr 22 21:21:23.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8-87nkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1107' Apr 22 21:21:24.088: INFO: stderr: "" Apr 22 21:21:24.088: INFO: stdout: "true" Apr 22 21:21:24.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8-87nkt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1107' Apr 22 21:21:24.188: INFO: stderr: "" Apr 22 21:21:24.188: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Apr 22 21:21:24.188: INFO: e2e-test-httpd-rc-0b434e810a317eaadbc1e6f8bb6b46b8-87nkt is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Apr 22 21:21:24.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1107' Apr 22 21:21:24.292: INFO: stderr: "" Apr 22 21:21:24.292: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:21:24.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1107" for this suite. 
• [SLOW TEST:18.702 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":56,"skipped":810,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:21:24.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:21:24.973: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:21:27.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187285, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187285, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187285, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187284, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:21:30.064: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:21:30.188: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-4613" for this suite. STEP: Destroying namespace "webhook-4613-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.932 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":57,"skipped":811,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:21:30.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9404 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 22 21:21:30.332: INFO: Waiting up to 10m0s for all (but 0) nodes to 
be schedulable STEP: Creating test pods Apr 22 21:21:52.495: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.150:8080/dial?request=hostname&protocol=http&host=10.244.1.16&port=8080&tries=1'] Namespace:pod-network-test-9404 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:21:52.495: INFO: >>> kubeConfig: /root/.kube/config I0422 21:21:52.529263 6 log.go:172] (0xc0011c7c30) (0xc001aafe00) Create stream I0422 21:21:52.529304 6 log.go:172] (0xc0011c7c30) (0xc001aafe00) Stream added, broadcasting: 1 I0422 21:21:52.531433 6 log.go:172] (0xc0011c7c30) Reply frame received for 1 I0422 21:21:52.531487 6 log.go:172] (0xc0011c7c30) (0xc0028a9400) Create stream I0422 21:21:52.531505 6 log.go:172] (0xc0011c7c30) (0xc0028a9400) Stream added, broadcasting: 3 I0422 21:21:52.532602 6 log.go:172] (0xc0011c7c30) Reply frame received for 3 I0422 21:21:52.532643 6 log.go:172] (0xc0011c7c30) (0xc001d270e0) Create stream I0422 21:21:52.532676 6 log.go:172] (0xc0011c7c30) (0xc001d270e0) Stream added, broadcasting: 5 I0422 21:21:52.534082 6 log.go:172] (0xc0011c7c30) Reply frame received for 5 I0422 21:21:52.627092 6 log.go:172] (0xc0011c7c30) Data frame received for 3 I0422 21:21:52.627131 6 log.go:172] (0xc0028a9400) (3) Data frame handling I0422 21:21:52.627156 6 log.go:172] (0xc0028a9400) (3) Data frame sent I0422 21:21:52.627571 6 log.go:172] (0xc0011c7c30) Data frame received for 3 I0422 21:21:52.627599 6 log.go:172] (0xc0028a9400) (3) Data frame handling I0422 21:21:52.627742 6 log.go:172] (0xc0011c7c30) Data frame received for 5 I0422 21:21:52.627771 6 log.go:172] (0xc001d270e0) (5) Data frame handling I0422 21:21:52.629570 6 log.go:172] (0xc0011c7c30) Data frame received for 1 I0422 21:21:52.629623 6 log.go:172] (0xc001aafe00) (1) Data frame handling I0422 21:21:52.629647 6 log.go:172] (0xc001aafe00) (1) Data frame sent I0422 21:21:52.629663 6 log.go:172] (0xc0011c7c30) 
(0xc001aafe00) Stream removed, broadcasting: 1 I0422 21:21:52.629690 6 log.go:172] (0xc0011c7c30) Go away received I0422 21:21:52.629811 6 log.go:172] (0xc0011c7c30) (0xc001aafe00) Stream removed, broadcasting: 1 I0422 21:21:52.629833 6 log.go:172] (0xc0011c7c30) (0xc0028a9400) Stream removed, broadcasting: 3 I0422 21:21:52.629860 6 log.go:172] (0xc0011c7c30) (0xc001d270e0) Stream removed, broadcasting: 5 Apr 22 21:21:52.629: INFO: Waiting for responses: map[] Apr 22 21:21:52.633: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.150:8080/dial?request=hostname&protocol=http&host=10.244.2.149&port=8080&tries=1'] Namespace:pod-network-test-9404 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:21:52.633: INFO: >>> kubeConfig: /root/.kube/config I0422 21:21:52.664520 6 log.go:172] (0xc00158c420) (0xc001d981e0) Create stream I0422 21:21:52.664559 6 log.go:172] (0xc00158c420) (0xc001d981e0) Stream added, broadcasting: 1 I0422 21:21:52.675077 6 log.go:172] (0xc00158c420) Reply frame received for 1 I0422 21:21:52.675147 6 log.go:172] (0xc00158c420) (0xc001d272c0) Create stream I0422 21:21:52.675167 6 log.go:172] (0xc00158c420) (0xc001d272c0) Stream added, broadcasting: 3 I0422 21:21:52.677058 6 log.go:172] (0xc00158c420) Reply frame received for 3 I0422 21:21:52.677233 6 log.go:172] (0xc00158c420) (0xc00230b860) Create stream I0422 21:21:52.677258 6 log.go:172] (0xc00158c420) (0xc00230b860) Stream added, broadcasting: 5 I0422 21:21:52.679525 6 log.go:172] (0xc00158c420) Reply frame received for 5 I0422 21:21:52.742475 6 log.go:172] (0xc00158c420) Data frame received for 3 I0422 21:21:52.742505 6 log.go:172] (0xc001d272c0) (3) Data frame handling I0422 21:21:52.742520 6 log.go:172] (0xc001d272c0) (3) Data frame sent I0422 21:21:52.742893 6 log.go:172] (0xc00158c420) Data frame received for 5 I0422 21:21:52.742941 6 log.go:172] (0xc00230b860) (5) Data frame handling 
I0422 21:21:52.743163 6 log.go:172] (0xc00158c420) Data frame received for 3 I0422 21:21:52.743175 6 log.go:172] (0xc001d272c0) (3) Data frame handling I0422 21:21:52.744397 6 log.go:172] (0xc00158c420) Data frame received for 1 I0422 21:21:52.744468 6 log.go:172] (0xc001d981e0) (1) Data frame handling I0422 21:21:52.744505 6 log.go:172] (0xc001d981e0) (1) Data frame sent I0422 21:21:52.744546 6 log.go:172] (0xc00158c420) (0xc001d981e0) Stream removed, broadcasting: 1 I0422 21:21:52.744609 6 log.go:172] (0xc00158c420) Go away received I0422 21:21:52.744694 6 log.go:172] (0xc00158c420) (0xc001d981e0) Stream removed, broadcasting: 1 I0422 21:21:52.744725 6 log.go:172] (0xc00158c420) (0xc001d272c0) Stream removed, broadcasting: 3 I0422 21:21:52.744745 6 log.go:172] (0xc00158c420) (0xc00230b860) Stream removed, broadcasting: 5 Apr 22 21:21:52.744: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:21:52.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9404" for this suite. 
• [SLOW TEST:22.483 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":811,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:21:52.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7941.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7941.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7941.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7941.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7941.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7941.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7941.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7941.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7941.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7941.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 22 21:21:58.885: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local from pod dns-7941/dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf: the server could not find the requested resource (get pods dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf)
Apr 22 21:21:58.888: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local from pod dns-7941/dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf: the server could not find the requested resource (get pods dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf)
Apr 22 21:21:58.891: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7941.svc.cluster.local from pod dns-7941/dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf: the server could not find the requested resource (get pods dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf)
Apr 22 21:21:58.893: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7941.svc.cluster.local from pod dns-7941/dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf: the server could not find the requested resource (get pods dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf)
Apr 22 21:21:58.901: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local from pod dns-7941/dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf: the server could not find the requested resource (get pods dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf)
Apr 22 21:21:58.904: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local from
pod dns-7941/dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf: the server could not find the requested resource (get pods dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf)
Apr 22 21:21:58.907: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7941.svc.cluster.local from pod dns-7941/dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf: the server could not find the requested resource (get pods dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf)
Apr 22 21:21:58.909: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7941.svc.cluster.local from pod dns-7941/dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf: the server could not find the requested resource (get pods dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf)
Apr 22 21:21:58.914: INFO: Lookups using dns-7941/dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7941.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7941.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local jessie_udp@dns-test-service-2.dns-7941.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7941.svc.cluster.local]
[... the same eight lookups failed identically on retries at 21:22:03, 21:22:08, 21:22:13, 21:22:18 and 21:22:23 ...]
Apr 22 21:22:28.954: INFO: DNS probes using dns-7941/dns-test-ec6cf7de-4793-42be-bf67-39397f54c7cf succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22
21:22:29.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7941" for this suite.
• [SLOW TEST:36.956 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":59,"skipped":817,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc
  should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:22:29.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 22 21:22:29.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8739'
Apr 22 21:22:29.986: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 22 21:22:29.986: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Apr 22 21:22:30.015: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-mbvp2]
Apr 22 21:22:30.015: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-mbvp2" in namespace "kubectl-8739" to be "running and ready"
Apr 22 21:22:30.021: INFO: Pod "e2e-test-httpd-rc-mbvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.005212ms
Apr 22 21:22:32.024: INFO: Pod "e2e-test-httpd-rc-mbvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009398289s
Apr 22 21:22:34.036: INFO: Pod "e2e-test-httpd-rc-mbvp2": Phase="Running", Reason="", readiness=true. Elapsed: 4.021284649s
Apr 22 21:22:34.036: INFO: Pod "e2e-test-httpd-rc-mbvp2" satisfied condition "running and ready"
Apr 22 21:22:34.036: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-mbvp2]
Apr 22 21:22:34.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8739'
Apr 22 21:22:34.159: INFO: stderr: ""
Apr 22 21:22:34.159: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.18. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.18. Set the 'ServerName' directive globally to suppress this message\n[Wed Apr 22 21:22:32.699089 2020] [mpm_event:notice] [pid 1:tid 140030234934120] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Apr 22 21:22:32.699149 2020] [core:notice] [pid 1:tid 140030234934120] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530
Apr 22 21:22:34.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8739'
Apr 22 21:22:34.263: INFO: stderr: ""
Apr 22 21:22:34.263: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:22:34.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8739" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":60,"skipped":820,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:22:34.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0422 21:22:46.021004 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 22 21:22:46.021: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:22:46.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5269" for this suite.
• [SLOW TEST:11.907 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":61,"skipped":823,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:22:46.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-9e3fac59-39ff-4691-8566-577a060bef52
STEP: Creating configMap with name cm-test-opt-upd-efdca8ba-6453-4e8e-a73a-94eab278ac26
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9e3fac59-39ff-4691-8566-577a060bef52
STEP: Updating configmap cm-test-opt-upd-efdca8ba-6453-4e8e-a73a-94eab278ac26
STEP: Creating configMap with name cm-test-opt-create-98a4164a-f78f-4d9c-bfaa-cd433185909a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:22:54.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4896" for this suite. • [SLOW TEST:8.738 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":833,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:22:54.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-528z STEP: Creating a pod to test atomic-volume-subpath Apr 22 21:22:54.987: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-528z" in namespace "subpath-3199" to be "success or failure" Apr 22 21:22:55.004: INFO: Pod 
"pod-subpath-test-secret-528z": Phase="Pending", Reason="", readiness=false. Elapsed: 16.613097ms Apr 22 21:22:57.010: INFO: Pod "pod-subpath-test-secret-528z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022721452s Apr 22 21:22:59.014: INFO: Pod "pod-subpath-test-secret-528z": Phase="Running", Reason="", readiness=true. Elapsed: 4.026816298s Apr 22 21:23:01.018: INFO: Pod "pod-subpath-test-secret-528z": Phase="Running", Reason="", readiness=true. Elapsed: 6.030505376s Apr 22 21:23:03.022: INFO: Pod "pod-subpath-test-secret-528z": Phase="Running", Reason="", readiness=true. Elapsed: 8.03458449s Apr 22 21:23:05.027: INFO: Pod "pod-subpath-test-secret-528z": Phase="Running", Reason="", readiness=true. Elapsed: 10.039264819s Apr 22 21:23:07.030: INFO: Pod "pod-subpath-test-secret-528z": Phase="Running", Reason="", readiness=true. Elapsed: 12.042325069s Apr 22 21:23:09.034: INFO: Pod "pod-subpath-test-secret-528z": Phase="Running", Reason="", readiness=true. Elapsed: 14.046982627s Apr 22 21:23:11.039: INFO: Pod "pod-subpath-test-secret-528z": Phase="Running", Reason="", readiness=true. Elapsed: 16.051320953s Apr 22 21:23:13.042: INFO: Pod "pod-subpath-test-secret-528z": Phase="Running", Reason="", readiness=true. Elapsed: 18.054911087s Apr 22 21:23:15.063: INFO: Pod "pod-subpath-test-secret-528z": Phase="Running", Reason="", readiness=true. Elapsed: 20.075480341s Apr 22 21:23:17.067: INFO: Pod "pod-subpath-test-secret-528z": Phase="Running", Reason="", readiness=true. Elapsed: 22.079173054s Apr 22 21:23:19.071: INFO: Pod "pod-subpath-test-secret-528z": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.083204867s STEP: Saw pod success Apr 22 21:23:19.071: INFO: Pod "pod-subpath-test-secret-528z" satisfied condition "success or failure" Apr 22 21:23:19.073: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-528z container test-container-subpath-secret-528z: STEP: delete the pod Apr 22 21:23:19.101: INFO: Waiting for pod pod-subpath-test-secret-528z to disappear Apr 22 21:23:19.128: INFO: Pod pod-subpath-test-secret-528z no longer exists STEP: Deleting pod pod-subpath-test-secret-528z Apr 22 21:23:19.128: INFO: Deleting pod "pod-subpath-test-secret-528z" in namespace "subpath-3199" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:23:19.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3199" for this suite. • [SLOW TEST:24.225 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":63,"skipped":848,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:23:19.142: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 22 21:23:19.210: INFO: Waiting up to 5m0s for pod "pod-5e38b200-d59f-421c-a2b3-56211fcee124" in namespace "emptydir-1702" to be "success or failure" Apr 22 21:23:19.225: INFO: Pod "pod-5e38b200-d59f-421c-a2b3-56211fcee124": Phase="Pending", Reason="", readiness=false. Elapsed: 15.797837ms Apr 22 21:23:21.229: INFO: Pod "pod-5e38b200-d59f-421c-a2b3-56211fcee124": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019337279s Apr 22 21:23:23.232: INFO: Pod "pod-5e38b200-d59f-421c-a2b3-56211fcee124": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022099414s STEP: Saw pod success Apr 22 21:23:23.232: INFO: Pod "pod-5e38b200-d59f-421c-a2b3-56211fcee124" satisfied condition "success or failure" Apr 22 21:23:23.238: INFO: Trying to get logs from node jerma-worker pod pod-5e38b200-d59f-421c-a2b3-56211fcee124 container test-container: STEP: delete the pod Apr 22 21:23:23.295: INFO: Waiting for pod pod-5e38b200-d59f-421c-a2b3-56211fcee124 to disappear Apr 22 21:23:23.304: INFO: Pod pod-5e38b200-d59f-421c-a2b3-56211fcee124 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:23:23.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1702" for this suite. 
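The pod exercised by this EmptyDir test can be approximated by a manifest like the one below. This is a sketch, not copied from the suite: the pod name, image tag, and mounttest arguments are illustrative stand-ins for the generated values in the log; only the tmpfs-backed emptyDir and the 0644 mode are taken from the test title.

```yaml
# Sketch of the kind of pod this test creates: an emptyDir backed by tmpfs
# (medium: Memory), with a file written as root using mode 0644.
# All names and args here are hypothetical stand-ins for generated ones.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["mounttest", "--new_file_mode=/test-volume/test-file"]  # illustrative args
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed, as opposed to node-local disk
```

The test then reads the container logs (as seen above with "Trying to get logs from node jerma-worker") to confirm the observed file mode, and deletes the pod on success.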
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":854,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:23:23.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:23:23.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9545" for this suite. 
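The Lease API being verified here lives in the coordination.k8s.io group (v1 as of the kube-apiserver version in this run). A minimal Lease object looks like the following; the name, namespace, and durations are arbitrary example values, not taken from the test:

```yaml
# Minimal Lease example; names and durations are arbitrary.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: demo-lease
  namespace: default
spec:
  holderIdentity: demo-holder   # identity of the client currently holding the lease
  leaseDurationSeconds: 30      # how long the holder is considered valid after renewal
```

The conformance test exercises the standard verbs (create, get, list, update, patch, delete) against objects of this shape.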
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":65,"skipped":875,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:23:23.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 22 21:23:23.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3479' Apr 22 21:23:23.701: INFO: stderr: "" Apr 22 21:23:23.701: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 22 21:23:24.706: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:23:24.706: INFO: Found 0 / 1 Apr 22 21:23:25.717: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:23:25.718: INFO: Found 0 / 1 Apr 22 21:23:26.707: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:23:26.707: INFO: Found 1 / 1 Apr 22 21:23:26.707: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 22 21:23:26.710: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:23:26.710: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 22 21:23:26.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-jljr9 --namespace=kubectl-3479 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 22 21:23:26.808: INFO: stderr: "" Apr 22 21:23:26.808: INFO: stdout: "pod/agnhost-master-jljr9 patched\n" STEP: checking annotations Apr 22 21:23:26.832: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:23:26.833: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:23:26.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3479" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":66,"skipped":876,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:23:26.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:23:27.347: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Apr 22 21:23:29.358: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187407, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187407, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187407, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187407, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:23:32.420: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:23:33.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-932-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:23:34.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2559" for this suite. 
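The webhook registration performed in this test ("Registering the mutating webhook for custom resource ... via the AdmissionRegistration API") has roughly the shape below. This is a hypothetical sketch: the webhook name, service path, and CA bundle are placeholders, and only the resource name, service name, and namespace are taken from the log.

```yaml
# Sketch of a MutatingWebhookConfiguration of the kind this test registers.
# The webhook name, path, and caBundle are placeholders.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-crd-mutator
webhooks:
- name: demo.webhook.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-932-crds"]
  clientConfig:
    service:
      namespace: webhook-2559
      name: e2e-test-webhook
      path: /mutating-custom-resource   # placeholder path
    caBundle: "<base64-encoded CA>"     # placeholder
```

With pruning enabled on the CRD, fields added by the webhook must be declared in the CRD's structural schema or they are pruned from the stored object, which is the behavior this test asserts.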
STEP: Destroying namespace "webhook-2559-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.135 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":67,"skipped":882,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:23:34.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 22 21:23:35.074: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 22 21:23:35.095: INFO: Waiting for terminating namespaces to be deleted... 
Apr 22 21:23:35.097: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 22 21:23:35.102: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:23:35.102: INFO: Container kube-proxy ready: true, restart count 0 Apr 22 21:23:35.102: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:23:35.102: INFO: Container kindnet-cni ready: true, restart count 0 Apr 22 21:23:35.103: INFO: agnhost-master-jljr9 from kubectl-3479 started at 2020-04-22 21:23:23 +0000 UTC (1 container statuses recorded) Apr 22 21:23:35.103: INFO: Container agnhost-master ready: true, restart count 0 Apr 22 21:23:35.103: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 22 21:23:35.108: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 22 21:23:35.108: INFO: Container kube-hunter ready: false, restart count 0 Apr 22 21:23:35.108: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:23:35.108: INFO: Container kindnet-cni ready: true, restart count 0 Apr 22 21:23:35.108: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 22 21:23:35.108: INFO: Container kube-bench ready: false, restart count 0 Apr 22 21:23:35.108: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:23:35.108: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-621d5917-4d50-4e65-973d-134de960e982 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-621d5917-4d50-4e65-973d-134de960e982 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-621d5917-4d50-4e65-973d-134de960e982 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:23:51.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8232" for this suite. 
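The first two pods this predicate test schedules can be sketched as below: identical hostPort but different hostIP, so they can land on the same node without a port conflict; the third pod (not shown) reuses 127.0.0.2 with UDP instead of TCP. Pod names and the image are illustrative assumptions; the ports and IPs are from the log.

```yaml
# Sketch: two pods with the same hostPort (54321) but different hostIP,
# pinned to the same node. Names and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: 127.0.0.2   # different hostIP: no conflict with pod1
      protocol: TCP
```

The scheduler treats a hostPort claim as the (hostIP, hostPort, protocol) triple, which is why all three pods schedule onto the labeled node.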
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.484 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":68,"skipped":884,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:23:51.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-d2ee9c97-c41e-42c8-9797-14d2ed9ad39b STEP: Creating secret with name s-test-opt-upd-0a4e7f36-8cf2-44da-9a6c-a7ae3df9d096 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d2ee9c97-c41e-42c8-9797-14d2ed9ad39b STEP: Updating secret s-test-opt-upd-0a4e7f36-8cf2-44da-9a6c-a7ae3df9d096 STEP: Creating secret with name 
s-test-opt-create-98b0cdf7-87b5-47c4-83cd-247e3c52e03d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:24:59.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6940" for this suite. • [SLOW TEST:68.539 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":907,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:25:00.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 22 21:25:00.086: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:25:07.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2103" for this suite. • [SLOW TEST:7.410 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":70,"skipped":912,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:25:07.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6021 [It] should perform rolling updates and roll backs of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 22 21:25:07.531: INFO: Found 0 stateful pods, waiting for 3 Apr 22 21:25:17.536: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 21:25:17.536: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 21:25:17.536: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 22 21:25:27.536: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 21:25:27.537: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 21:25:27.537: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 22 21:25:27.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6021 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 21:25:27.783: INFO: stderr: "I0422 21:25:27.679375 1016 log.go:172] (0xc0000f6420) (0xc0008f8000) Create stream\nI0422 21:25:27.679432 1016 log.go:172] (0xc0000f6420) (0xc0008f8000) Stream added, broadcasting: 1\nI0422 21:25:27.682615 1016 log.go:172] (0xc0000f6420) Reply frame received for 1\nI0422 21:25:27.682660 1016 log.go:172] (0xc0000f6420) (0xc000916000) Create stream\nI0422 21:25:27.682679 1016 log.go:172] (0xc0000f6420) (0xc000916000) Stream added, broadcasting: 3\nI0422 21:25:27.683720 1016 log.go:172] (0xc0000f6420) Reply frame received for 3\nI0422 21:25:27.683758 1016 log.go:172] (0xc0000f6420) (0xc0005e4640) Create stream\nI0422 21:25:27.683777 1016 log.go:172] (0xc0000f6420) (0xc0005e4640) Stream added, broadcasting: 5\nI0422 21:25:27.684700 1016 log.go:172] (0xc0000f6420) Reply frame received for 5\nI0422 21:25:27.739613 1016 log.go:172] (0xc0000f6420) Data frame received for 5\nI0422 
21:25:27.739642 1016 log.go:172] (0xc0005e4640) (5) Data frame handling\nI0422 21:25:27.739665 1016 log.go:172] (0xc0005e4640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0422 21:25:27.775537 1016 log.go:172] (0xc0000f6420) Data frame received for 3\nI0422 21:25:27.775571 1016 log.go:172] (0xc000916000) (3) Data frame handling\nI0422 21:25:27.775594 1016 log.go:172] (0xc000916000) (3) Data frame sent\nI0422 21:25:27.775913 1016 log.go:172] (0xc0000f6420) Data frame received for 5\nI0422 21:25:27.775939 1016 log.go:172] (0xc0005e4640) (5) Data frame handling\nI0422 21:25:27.775976 1016 log.go:172] (0xc0000f6420) Data frame received for 3\nI0422 21:25:27.776001 1016 log.go:172] (0xc000916000) (3) Data frame handling\nI0422 21:25:27.778074 1016 log.go:172] (0xc0000f6420) Data frame received for 1\nI0422 21:25:27.778106 1016 log.go:172] (0xc0008f8000) (1) Data frame handling\nI0422 21:25:27.778127 1016 log.go:172] (0xc0008f8000) (1) Data frame sent\nI0422 21:25:27.778164 1016 log.go:172] (0xc0000f6420) (0xc0008f8000) Stream removed, broadcasting: 1\nI0422 21:25:27.778305 1016 log.go:172] (0xc0000f6420) Go away received\nI0422 21:25:27.778589 1016 log.go:172] (0xc0000f6420) (0xc0008f8000) Stream removed, broadcasting: 1\nI0422 21:25:27.778629 1016 log.go:172] (0xc0000f6420) (0xc000916000) Stream removed, broadcasting: 3\nI0422 21:25:27.778643 1016 log.go:172] (0xc0000f6420) (0xc0005e4640) Stream removed, broadcasting: 5\n" Apr 22 21:25:27.783: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 21:25:27.783: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 22 21:25:37.822: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order 
Apr 22 21:25:47.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6021 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 21:25:48.080: INFO: stderr: "I0422 21:25:48.004250 1036 log.go:172] (0xc000a06000) (0xc000a5e000) Create stream\nI0422 21:25:48.004304 1036 log.go:172] (0xc000a06000) (0xc000a5e000) Stream added, broadcasting: 1\nI0422 21:25:48.019447 1036 log.go:172] (0xc000a06000) Reply frame received for 1\nI0422 21:25:48.021769 1036 log.go:172] (0xc000a06000) (0xc000b3c460) Create stream\nI0422 21:25:48.021785 1036 log.go:172] (0xc000a06000) (0xc000b3c460) Stream added, broadcasting: 3\nI0422 21:25:48.022642 1036 log.go:172] (0xc000a06000) Reply frame received for 3\nI0422 21:25:48.022663 1036 log.go:172] (0xc000a06000) (0xc0005de6e0) Create stream\nI0422 21:25:48.022670 1036 log.go:172] (0xc000a06000) (0xc0005de6e0) Stream added, broadcasting: 5\nI0422 21:25:48.023306 1036 log.go:172] (0xc000a06000) Reply frame received for 5\nI0422 21:25:48.073473 1036 log.go:172] (0xc000a06000) Data frame received for 5\nI0422 21:25:48.073521 1036 log.go:172] (0xc000a06000) Data frame received for 3\nI0422 21:25:48.073567 1036 log.go:172] (0xc000b3c460) (3) Data frame handling\nI0422 21:25:48.073591 1036 log.go:172] (0xc000b3c460) (3) Data frame sent\nI0422 21:25:48.073611 1036 log.go:172] (0xc000a06000) Data frame received for 3\nI0422 21:25:48.073626 1036 log.go:172] (0xc000b3c460) (3) Data frame handling\nI0422 21:25:48.073653 1036 log.go:172] (0xc0005de6e0) (5) Data frame handling\nI0422 21:25:48.073675 1036 log.go:172] (0xc0005de6e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0422 21:25:48.073696 1036 log.go:172] (0xc000a06000) Data frame received for 5\nI0422 21:25:48.073711 1036 log.go:172] (0xc0005de6e0) (5) Data frame handling\nI0422 21:25:48.075551 1036 log.go:172] (0xc000a06000) Data frame received for 1\nI0422 21:25:48.075584 1036 
log.go:172] (0xc000a5e000) (1) Data frame handling\nI0422 21:25:48.075607 1036 log.go:172] (0xc000a5e000) (1) Data frame sent\nI0422 21:25:48.075667 1036 log.go:172] (0xc000a06000) (0xc000a5e000) Stream removed, broadcasting: 1\nI0422 21:25:48.075947 1036 log.go:172] (0xc000a06000) Go away received\nI0422 21:25:48.076183 1036 log.go:172] (0xc000a06000) (0xc000a5e000) Stream removed, broadcasting: 1\nI0422 21:25:48.076220 1036 log.go:172] (0xc000a06000) (0xc000b3c460) Stream removed, broadcasting: 3\nI0422 21:25:48.076234 1036 log.go:172] (0xc000a06000) (0xc0005de6e0) Stream removed, broadcasting: 5\n" Apr 22 21:25:48.080: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 21:25:48.080: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 21:26:08.101: INFO: Waiting for StatefulSet statefulset-6021/ss2 to complete update STEP: Rolling back to a previous revision Apr 22 21:26:18.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6021 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 21:26:18.346: INFO: stderr: "I0422 21:26:18.246601 1055 log.go:172] (0xc0009a2630) (0xc000a10000) Create stream\nI0422 21:26:18.246683 1055 log.go:172] (0xc0009a2630) (0xc000a10000) Stream added, broadcasting: 1\nI0422 21:26:18.249898 1055 log.go:172] (0xc0009a2630) Reply frame received for 1\nI0422 21:26:18.249935 1055 log.go:172] (0xc0009a2630) (0xc000609a40) Create stream\nI0422 21:26:18.249945 1055 log.go:172] (0xc0009a2630) (0xc000609a40) Stream added, broadcasting: 3\nI0422 21:26:18.250916 1055 log.go:172] (0xc0009a2630) Reply frame received for 3\nI0422 21:26:18.250941 1055 log.go:172] (0xc0009a2630) (0xc000609c20) Create stream\nI0422 21:26:18.250950 1055 log.go:172] (0xc0009a2630) (0xc000609c20) Stream added, broadcasting: 5\nI0422 21:26:18.252159 1055 
log.go:172] (0xc0009a2630) Reply frame received for 5\nI0422 21:26:18.309089 1055 log.go:172] (0xc0009a2630) Data frame received for 5\nI0422 21:26:18.309123 1055 log.go:172] (0xc000609c20) (5) Data frame handling\nI0422 21:26:18.309139 1055 log.go:172] (0xc000609c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0422 21:26:18.340569 1055 log.go:172] (0xc0009a2630) Data frame received for 5\nI0422 21:26:18.340603 1055 log.go:172] (0xc000609c20) (5) Data frame handling\nI0422 21:26:18.340625 1055 log.go:172] (0xc0009a2630) Data frame received for 3\nI0422 21:26:18.340634 1055 log.go:172] (0xc000609a40) (3) Data frame handling\nI0422 21:26:18.340644 1055 log.go:172] (0xc000609a40) (3) Data frame sent\nI0422 21:26:18.340818 1055 log.go:172] (0xc0009a2630) Data frame received for 3\nI0422 21:26:18.340849 1055 log.go:172] (0xc000609a40) (3) Data frame handling\nI0422 21:26:18.342614 1055 log.go:172] (0xc0009a2630) Data frame received for 1\nI0422 21:26:18.342631 1055 log.go:172] (0xc000a10000) (1) Data frame handling\nI0422 21:26:18.342638 1055 log.go:172] (0xc000a10000) (1) Data frame sent\nI0422 21:26:18.342658 1055 log.go:172] (0xc0009a2630) (0xc000a10000) Stream removed, broadcasting: 1\nI0422 21:26:18.342676 1055 log.go:172] (0xc0009a2630) Go away received\nI0422 21:26:18.343120 1055 log.go:172] (0xc0009a2630) (0xc000a10000) Stream removed, broadcasting: 1\nI0422 21:26:18.343147 1055 log.go:172] (0xc0009a2630) (0xc000609a40) Stream removed, broadcasting: 3\nI0422 21:26:18.343159 1055 log.go:172] (0xc0009a2630) (0xc000609c20) Stream removed, broadcasting: 5\n" Apr 22 21:26:18.347: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 21:26:18.347: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 21:26:28.384: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 22 
21:26:38.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6021 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 21:26:38.656: INFO: stderr: "I0422 21:26:38.564388 1079 log.go:172] (0xc0007f6b00) (0xc0007ec000) Create stream\nI0422 21:26:38.564462 1079 log.go:172] (0xc0007f6b00) (0xc0007ec000) Stream added, broadcasting: 1\nI0422 21:26:38.566781 1079 log.go:172] (0xc0007f6b00) Reply frame received for 1\nI0422 21:26:38.566819 1079 log.go:172] (0xc0007f6b00) (0xc000693ae0) Create stream\nI0422 21:26:38.566826 1079 log.go:172] (0xc0007f6b00) (0xc000693ae0) Stream added, broadcasting: 3\nI0422 21:26:38.567529 1079 log.go:172] (0xc0007f6b00) Reply frame received for 3\nI0422 21:26:38.567557 1079 log.go:172] (0xc0007f6b00) (0xc0007ec140) Create stream\nI0422 21:26:38.567565 1079 log.go:172] (0xc0007f6b00) (0xc0007ec140) Stream added, broadcasting: 5\nI0422 21:26:38.568193 1079 log.go:172] (0xc0007f6b00) Reply frame received for 5\nI0422 21:26:38.650663 1079 log.go:172] (0xc0007f6b00) Data frame received for 5\nI0422 21:26:38.650710 1079 log.go:172] (0xc0007ec140) (5) Data frame handling\nI0422 21:26:38.650728 1079 log.go:172] (0xc0007ec140) (5) Data frame sent\nI0422 21:26:38.650740 1079 log.go:172] (0xc0007f6b00) Data frame received for 5\nI0422 21:26:38.650762 1079 log.go:172] (0xc0007ec140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0422 21:26:38.650807 1079 log.go:172] (0xc0007f6b00) Data frame received for 3\nI0422 21:26:38.650842 1079 log.go:172] (0xc000693ae0) (3) Data frame handling\nI0422 21:26:38.650862 1079 log.go:172] (0xc000693ae0) (3) Data frame sent\nI0422 21:26:38.650878 1079 log.go:172] (0xc0007f6b00) Data frame received for 3\nI0422 21:26:38.650890 1079 log.go:172] (0xc000693ae0) (3) Data frame handling\nI0422 21:26:38.652279 1079 log.go:172] (0xc0007f6b00) Data frame received for 1\nI0422 21:26:38.652306 1079 log.go:172] 
(0xc0007ec000) (1) Data frame handling\nI0422 21:26:38.652326 1079 log.go:172] (0xc0007ec000) (1) Data frame sent\nI0422 21:26:38.652347 1079 log.go:172] (0xc0007f6b00) (0xc0007ec000) Stream removed, broadcasting: 1\nI0422 21:26:38.652369 1079 log.go:172] (0xc0007f6b00) Go away received\nI0422 21:26:38.652728 1079 log.go:172] (0xc0007f6b00) (0xc0007ec000) Stream removed, broadcasting: 1\nI0422 21:26:38.652750 1079 log.go:172] (0xc0007f6b00) (0xc000693ae0) Stream removed, broadcasting: 3\nI0422 21:26:38.652759 1079 log.go:172] (0xc0007f6b00) (0xc0007ec140) Stream removed, broadcasting: 5\n" Apr 22 21:26:38.657: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 21:26:38.657: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 21:26:58.678: INFO: Waiting for StatefulSet statefulset-6021/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 22 21:27:08.687: INFO: Deleting all statefulset in ns statefulset-6021 Apr 22 21:27:08.690: INFO: Scaling statefulset ss2 to 0 Apr 22 21:27:38.710: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 21:27:38.713: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:27:38.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6021" for this suite. 
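[Editor's note] The rollback the log records above updates ss2's pod template and re-applies the old one, with the kubelet replacing pods from the highest ordinal down ("reverse ordinal order"). Outside the e2e framework, the same behavior can be reproduced with a StatefulSet along these lines — a minimal sketch only; the labels, image tag, and service name are illustrative, not taken from the test's actual manifest:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: ss2             # headless Service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels: {app: ss2}
  updateStrategy:
    type: RollingUpdate        # pods replaced one at a time, highest ordinal first
  template:
    metadata:
      labels: {app: ss2}
    spec:
      containers:
      - name: webserver
        image: httpd:2.4.38-alpine   # editing this field triggers the rolling update
```

Re-applying the manifest with a changed image starts the rolling update; applying the previous image again rolls it back, again highest ordinal first, which matches the "Rolling back update in reverse ordinal order" step above.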
• [SLOW TEST:151.326 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":71,"skipped":942,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:27:38.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 22 21:27:38.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Apr 22 21:27:41.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1652 create -f -'
Apr 22 21:27:45.031: INFO: stderr: ""
Apr 22 
21:27:45.031: INFO: stdout: "e2e-test-crd-publish-openapi-6199-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 22 21:27:45.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1652 delete e2e-test-crd-publish-openapi-6199-crds test-foo' Apr 22 21:27:45.166: INFO: stderr: "" Apr 22 21:27:45.166: INFO: stdout: "e2e-test-crd-publish-openapi-6199-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 22 21:27:45.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1652 apply -f -' Apr 22 21:27:45.472: INFO: stderr: "" Apr 22 21:27:45.472: INFO: stdout: "e2e-test-crd-publish-openapi-6199-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 22 21:27:45.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1652 delete e2e-test-crd-publish-openapi-6199-crds test-foo' Apr 22 21:27:45.591: INFO: stderr: "" Apr 22 21:27:45.591: INFO: stdout: "e2e-test-crd-publish-openapi-6199-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 22 21:27:45.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1652 create -f -' Apr 22 21:27:45.852: INFO: rc: 1 Apr 22 21:27:45.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1652 apply -f -' Apr 22 21:27:46.070: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 22 21:27:46.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1652 create -f -' Apr 22 21:27:46.336: INFO: rc: 1 Apr 22 21:27:46.336: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1652 apply -f -' Apr 22 21:27:46.597: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 22 21:27:46.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6199-crds' Apr 22 21:27:46.836: INFO: stderr: "" Apr 22 21:27:46.836: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6199-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 22 21:27:46.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6199-crds.metadata' Apr 22 21:27:47.127: INFO: stderr: "" Apr 22 21:27:47.127: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6199-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 22 21:27:47.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6199-crds.spec' Apr 22 21:27:47.384: INFO: stderr: "" Apr 22 21:27:47.384: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6199-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 22 21:27:47.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6199-crds.spec.bars' Apr 22 21:27:47.604: INFO: stderr: "" Apr 22 21:27:47.604: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6199-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 22 21:27:47.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6199-crds.spec.bars2' Apr 22 21:27:47.843: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:27:49.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1652" for this 
suite.
• [SLOW TEST:10.984 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":72,"skipped":992,"failed":0}
SSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:27:49.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-6949/configmap-test-acda12ec-445e-4624-b22a-eef935812e1f
STEP: Creating a pod to test consume configMaps
Apr 22 21:27:49.817: INFO: Waiting up to 5m0s for pod "pod-configmaps-990b5ea4-b8eb-479d-832f-402feef47ea0" in namespace "configmap-6949" to be "success or failure"
Apr 22 21:27:49.833: INFO: Pod "pod-configmaps-990b5ea4-b8eb-479d-832f-402feef47ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.300893ms
Apr 22 21:27:51.837: INFO: Pod "pod-configmaps-990b5ea4-b8eb-479d-832f-402feef47ea0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020400244s Apr 22 21:27:53.842: INFO: Pod "pod-configmaps-990b5ea4-b8eb-479d-832f-402feef47ea0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024964532s STEP: Saw pod success Apr 22 21:27:53.842: INFO: Pod "pod-configmaps-990b5ea4-b8eb-479d-832f-402feef47ea0" satisfied condition "success or failure" Apr 22 21:27:53.845: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-990b5ea4-b8eb-479d-832f-402feef47ea0 container env-test: STEP: delete the pod Apr 22 21:27:53.877: INFO: Waiting for pod pod-configmaps-990b5ea4-b8eb-479d-832f-402feef47ea0 to disappear Apr 22 21:27:53.885: INFO: Pod pod-configmaps-990b5ea4-b8eb-479d-832f-402feef47ea0 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:27:53.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6949" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":997,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:27:53.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a 
exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-ea0484f4-5a26-4f38-ad72-1201b05c30a0 in namespace container-probe-3804 Apr 22 21:27:58.007: INFO: Started pod busybox-ea0484f4-5a26-4f38-ad72-1201b05c30a0 in namespace container-probe-3804 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 21:27:58.010: INFO: Initial restart count of pod busybox-ea0484f4-5a26-4f38-ad72-1201b05c30a0 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:31:58.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3804" for this suite. • [SLOW TEST:244.706 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1049,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:31:58.628: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:31:59.776: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:32:01.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187919, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187919, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187919, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187919, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:32:04.821: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API 
STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:32:04.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5650" for this suite. STEP: Destroying namespace "webhook-5650-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.492 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":75,"skipped":1050,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:32:05.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized 
dnsConfig... Apr 22 21:32:05.222: INFO: Created pod &Pod{ObjectMeta:{dns-6464 dns-6464 /api/v1/namespaces/dns-6464/pods/dns-6464 47c4cb47-1dd0-4603-81c4-6472a376b748 10223244 0 2020-04-22 21:32:05 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x8tcd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x8tcd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x8tcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReferen
ce{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 22 21:32:09.242: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6464 PodName:dns-6464 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:32:09.242: INFO: >>> kubeConfig: /root/.kube/config I0422 21:32:09.314722 6 log.go:172] (0xc0011c7c30) (0xc001aaef00) Create stream I0422 21:32:09.314773 6 log.go:172] (0xc0011c7c30) (0xc001aaef00) Stream added, broadcasting: 1 I0422 21:32:09.316788 6 log.go:172] (0xc0011c7c30) Reply frame received for 1 I0422 21:32:09.316838 6 log.go:172] (0xc0011c7c30) (0xc001aaf040) Create stream I0422 21:32:09.316859 6 log.go:172] (0xc0011c7c30) (0xc001aaf040) Stream added, broadcasting: 3 I0422 21:32:09.317686 6 log.go:172] (0xc0011c7c30) Reply frame received for 3 I0422 21:32:09.317722 6 log.go:172] (0xc0011c7c30) (0xc001aa7360) Create stream I0422 21:32:09.317744 6 log.go:172] (0xc0011c7c30) (0xc001aa7360) Stream added, broadcasting: 5 I0422 21:32:09.318757 6 log.go:172] (0xc0011c7c30) Reply frame received for 5 I0422 21:32:09.416359 6 log.go:172] (0xc0011c7c30) Data frame received for 3 I0422 21:32:09.416440 6 log.go:172] (0xc001aaf040) (3) Data frame handling I0422 21:32:09.416473 6 log.go:172] (0xc001aaf040) (3) Data frame sent I0422 21:32:09.417846 6 log.go:172] (0xc0011c7c30) Data frame received for 5 I0422 21:32:09.417991 6 log.go:172] (0xc001aa7360) (5) Data frame handling I0422 21:32:09.418455 6 log.go:172] (0xc0011c7c30) Data frame received for 3 I0422 21:32:09.418479 6 log.go:172] (0xc001aaf040) (3) Data frame handling I0422 21:32:09.423931 6 log.go:172] (0xc0011c7c30) Data frame received for 1 I0422 21:32:09.423947 6 log.go:172] (0xc001aaef00) (1) Data frame handling I0422 21:32:09.423960 6 log.go:172] (0xc001aaef00) (1) Data frame sent I0422 21:32:09.423985 6 log.go:172] (0xc0011c7c30) (0xc001aaef00) Stream removed, broadcasting: 1 I0422 21:32:09.424102 6 log.go:172] (0xc0011c7c30) (0xc001aaef00) Stream removed, broadcasting: 1 I0422 21:32:09.424114 6 
log.go:172] (0xc0011c7c30) (0xc001aaf040) Stream removed, broadcasting: 3 I0422 21:32:09.424241 6 log.go:172] (0xc0011c7c30) (0xc001aa7360) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 22 21:32:09.424: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6464 PodName:dns-6464 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:32:09.424: INFO: >>> kubeConfig: /root/.kube/config I0422 21:32:09.447618 6 log.go:172] (0xc00158c0b0) (0xc001aa7900) Create stream I0422 21:32:09.447655 6 log.go:172] (0xc00158c0b0) (0xc001aa7900) Stream added, broadcasting: 1 I0422 21:32:09.449715 6 log.go:172] (0xc00158c0b0) Reply frame received for 1 I0422 21:32:09.449746 6 log.go:172] (0xc00158c0b0) (0xc0023da0a0) Create stream I0422 21:32:09.449759 6 log.go:172] (0xc00158c0b0) (0xc0023da0a0) Stream added, broadcasting: 3 I0422 21:32:09.450605 6 log.go:172] (0xc00158c0b0) Reply frame received for 3 I0422 21:32:09.450652 6 log.go:172] (0xc00158c0b0) (0xc001ee3f40) Create stream I0422 21:32:09.450671 6 log.go:172] (0xc00158c0b0) (0xc001ee3f40) Stream added, broadcasting: 5 I0422 21:32:09.451548 6 log.go:172] (0xc00158c0b0) Reply frame received for 5 I0422 21:32:09.518254 6 log.go:172] (0xc00158c0b0) Data frame received for 3 I0422 21:32:09.518286 6 log.go:172] (0xc0023da0a0) (3) Data frame handling I0422 21:32:09.518309 6 log.go:172] (0xc0023da0a0) (3) Data frame sent I0422 21:32:09.519709 6 log.go:172] (0xc00158c0b0) Data frame received for 3 I0422 21:32:09.519738 6 log.go:172] (0xc0023da0a0) (3) Data frame handling I0422 21:32:09.519922 6 log.go:172] (0xc00158c0b0) Data frame received for 5 I0422 21:32:09.519953 6 log.go:172] (0xc001ee3f40) (5) Data frame handling I0422 21:32:09.521745 6 log.go:172] (0xc00158c0b0) Data frame received for 1 I0422 21:32:09.521822 6 log.go:172] (0xc001aa7900) (1) Data frame handling I0422 21:32:09.521861 6 log.go:172] (0xc001aa7900) (1) 
Data frame sent I0422 21:32:09.521887 6 log.go:172] (0xc00158c0b0) (0xc001aa7900) Stream removed, broadcasting: 1 I0422 21:32:09.521910 6 log.go:172] (0xc00158c0b0) Go away received I0422 21:32:09.522076 6 log.go:172] (0xc00158c0b0) (0xc001aa7900) Stream removed, broadcasting: 1 I0422 21:32:09.522111 6 log.go:172] (0xc00158c0b0) (0xc0023da0a0) Stream removed, broadcasting: 3 I0422 21:32:09.522136 6 log.go:172] (0xc00158c0b0) (0xc001ee3f40) Stream removed, broadcasting: 5 Apr 22 21:32:09.522: INFO: Deleting pod dns-6464... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:32:09.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6464" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":76,"skipped":1061,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:32:09.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:32:20.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2226" for this suite. • [SLOW TEST:11.227 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":77,"skipped":1063,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:32:20.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 22 21:32:20.885: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 22 21:32:31.401: INFO: >>> kubeConfig: /root/.kube/config Apr 22 21:32:34.322: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:32:43.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5698" for this suite. 
• [SLOW TEST:23.038 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":78,"skipped":1096,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:32:43.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-d5e52999-f3a5-4c94-9022-ee0b4d738a26 STEP: Creating a pod to test consume configMaps Apr 22 21:32:43.929: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-10ae7aa3-9f4f-43cc-b2e7-afbe0f03f60c" in namespace "projected-1019" to be "success or failure" Apr 22 21:32:43.934: INFO: Pod "pod-projected-configmaps-10ae7aa3-9f4f-43cc-b2e7-afbe0f03f60c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.372086ms Apr 22 21:32:45.938: INFO: Pod "pod-projected-configmaps-10ae7aa3-9f4f-43cc-b2e7-afbe0f03f60c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008815878s Apr 22 21:32:47.943: INFO: Pod "pod-projected-configmaps-10ae7aa3-9f4f-43cc-b2e7-afbe0f03f60c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01336627s STEP: Saw pod success Apr 22 21:32:47.943: INFO: Pod "pod-projected-configmaps-10ae7aa3-9f4f-43cc-b2e7-afbe0f03f60c" satisfied condition "success or failure" Apr 22 21:32:47.946: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-10ae7aa3-9f4f-43cc-b2e7-afbe0f03f60c container projected-configmap-volume-test: STEP: delete the pod Apr 22 21:32:47.990: INFO: Waiting for pod pod-projected-configmaps-10ae7aa3-9f4f-43cc-b2e7-afbe0f03f60c to disappear Apr 22 21:32:48.000: INFO: Pod pod-projected-configmaps-10ae7aa3-9f4f-43cc-b2e7-afbe0f03f60c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:32:48.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1019" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1104,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:32:48.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 22 21:32:48.074: INFO: Waiting up to 5m0s for pod "downward-api-6a668fc5-1b7a-4df1-9549-7b55b20b3915" in namespace "downward-api-8767" to be "success or failure" Apr 22 21:32:48.078: INFO: Pod "downward-api-6a668fc5-1b7a-4df1-9549-7b55b20b3915": Phase="Pending", Reason="", readiness=false. Elapsed: 3.700517ms Apr 22 21:32:50.082: INFO: Pod "downward-api-6a668fc5-1b7a-4df1-9549-7b55b20b3915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008144409s Apr 22 21:32:52.087: INFO: Pod "downward-api-6a668fc5-1b7a-4df1-9549-7b55b20b3915": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012378237s STEP: Saw pod success Apr 22 21:32:52.087: INFO: Pod "downward-api-6a668fc5-1b7a-4df1-9549-7b55b20b3915" satisfied condition "success or failure" Apr 22 21:32:52.090: INFO: Trying to get logs from node jerma-worker2 pod downward-api-6a668fc5-1b7a-4df1-9549-7b55b20b3915 container dapi-container: STEP: delete the pod Apr 22 21:32:52.161: INFO: Waiting for pod downward-api-6a668fc5-1b7a-4df1-9549-7b55b20b3915 to disappear Apr 22 21:32:52.168: INFO: Pod downward-api-6a668fc5-1b7a-4df1-9549-7b55b20b3915 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:32:52.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8767" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:32:52.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components 
Apr 22 21:32:52.214: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 22 21:32:52.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8630' Apr 22 21:32:52.561: INFO: stderr: "" Apr 22 21:32:52.561: INFO: stdout: "service/agnhost-slave created\n" Apr 22 21:32:52.562: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 22 21:32:52.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8630' Apr 22 21:32:52.844: INFO: stderr: "" Apr 22 21:32:52.844: INFO: stdout: "service/agnhost-master created\n" Apr 22 21:32:52.844: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 22 21:32:52.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8630' Apr 22 21:32:53.183: INFO: stderr: "" Apr 22 21:32:53.183: INFO: stdout: "service/frontend created\n" Apr 22 21:32:53.183: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 22 21:32:53.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8630' Apr 22 21:32:53.421: INFO: stderr: "" Apr 22 21:32:53.421: INFO: stdout: "deployment.apps/frontend created\n" Apr 22 21:32:53.421: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 22 21:32:53.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8630' Apr 22 21:32:55.729: INFO: stderr: "" Apr 22 21:32:55.729: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 22 21:32:55.729: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 22 21:32:55.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8630' Apr 22 21:32:56.265: INFO: stderr: "" Apr 22 21:32:56.265: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 22 21:32:56.265: INFO: Waiting for all frontend pods to be Running. Apr 22 21:33:01.315: INFO: Waiting for frontend to serve content. Apr 22 21:33:01.327: INFO: Trying to add a new entry to the guestbook. Apr 22 21:33:01.336: INFO: Verifying that added entry can be retrieved. Apr 22 21:33:01.343: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources Apr 22 21:33:06.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8630' Apr 22 21:33:06.539: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 21:33:06.540: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 22 21:33:06.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8630' Apr 22 21:33:06.726: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 22 21:33:06.726: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 22 21:33:06.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8630' Apr 22 21:33:06.872: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 21:33:06.872: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 22 21:33:06.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8630' Apr 22 21:33:06.983: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 21:33:06.983: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 22 21:33:06.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8630' Apr 22 21:33:07.103: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 21:33:07.103: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 22 21:33:07.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8630' Apr 22 21:33:07.217: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 22 21:33:07.217: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:33:07.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8630" for this suite. • [SLOW TEST:15.048 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":81,"skipped":1143,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:33:07.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:33:07.368: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 22 21:33:12.371: INFO: 
Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 22 21:33:12.371: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 22 21:33:14.375: INFO: Creating deployment "test-rollover-deployment" Apr 22 21:33:14.385: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 22 21:33:16.392: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 22 21:33:16.400: INFO: Ensure that both replica sets have 1 created replica Apr 22 21:33:16.405: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 22 21:33:16.409: INFO: Updating deployment test-rollover-deployment Apr 22 21:33:16.409: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 22 21:33:18.420: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 22 21:33:18.426: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 22 21:33:18.431: INFO: all replica sets need to contain the pod-template-hash label Apr 22 21:33:18.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187996, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:33:20.440: INFO: all replica sets need to contain the pod-template-hash label Apr 22 21:33:20.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187999, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:33:22.440: INFO: all replica sets need to contain the pod-template-hash label Apr 22 21:33:22.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187999, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:33:24.439: INFO: all replica sets need to contain the pod-template-hash label Apr 22 21:33:24.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187999, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:33:26.440: INFO: all replica sets need to contain the pod-template-hash label Apr 22 21:33:26.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723187999, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:33:28.441: INFO: all replica sets need to contain the pod-template-hash label Apr 22 21:33:28.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187999, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723187994, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:33:30.439: INFO: Apr 22 21:33:30.439: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 22 21:33:30.448: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1687 /apis/apps/v1/namespaces/deployment-1687/deployments/test-rollover-deployment cd5dd1b5-7251-4f1f-b3c6-945a2f4cfc63 10223893 2 2020-04-22 21:33:14 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005618a78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-22 21:33:14 +0000 UTC,LastTransitionTime:2020-04-22 21:33:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-04-22 21:33:29 +0000 UTC,LastTransitionTime:2020-04-22 21:33:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 22 21:33:30.451: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-1687 
/apis/apps/v1/namespaces/deployment-1687/replicasets/test-rollover-deployment-574d6dfbff a60b7bad-f04d-412b-9059-c7fc35206eb6 10223883 2 2020-04-22 21:33:16 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment cd5dd1b5-7251-4f1f-b3c6-945a2f4cfc63 0xc00556e1e7 0xc00556e1e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00556e258 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 22 21:33:30.451: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 22 21:33:30.451: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1687 /apis/apps/v1/namespaces/deployment-1687/replicasets/test-rollover-controller b7d3a160-4a68-4f87-9824-b51690d5396b 10223892 2 2020-04-22 21:33:07 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment cd5dd1b5-7251-4f1f-b3c6-945a2f4cfc63 0xc00556e107 0xc00556e108}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00556e178 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 21:33:30.451: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-1687 /apis/apps/v1/namespaces/deployment-1687/replicasets/test-rollover-deployment-f6c94f66c edda1d4f-d52a-4757-bd52-24c761e28b0b 10223828 2 2020-04-22 21:33:14 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment cd5dd1b5-7251-4f1f-b3c6-945a2f4cfc63 0xc00556e2c0 0xc00556e2c1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil 
nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00556e338 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 21:33:30.453: INFO: Pod "test-rollover-deployment-574d6dfbff-fm5lf" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-fm5lf test-rollover-deployment-574d6dfbff- deployment-1687 /api/v1/namespaces/deployment-1687/pods/test-rollover-deployment-574d6dfbff-fm5lf 46b9b2a2-4e0d-494e-992d-1537d2d8699f 10223843 0 2020-04-22 21:33:16 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff a60b7bad-f04d-412b-9059-c7fc35206eb6 0xc005618e07 0xc005618e08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2rrvl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2rrvl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2rrvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Host
name:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:33:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:33:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:33:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:33:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.170,StartTime:2020-04-22 21:33:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 21:33:18 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://a5ceb967b502002c7e49d4b1438d0e131c599c8b7ed9e1eb30ceb76ea9b78b59,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.170,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:33:30.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1687" for this suite. • [SLOW TEST:23.236 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":82,"skipped":1145,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:33:30.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in 
volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-1bdcc37f-93ea-4c77-b281-3513777c9945 STEP: Creating a pod to test consume secrets Apr 22 21:33:30.538: INFO: Waiting up to 5m0s for pod "pod-secrets-d960b115-b26f-4d23-be4f-f62e82e9257b" in namespace "secrets-553" to be "success or failure" Apr 22 21:33:30.558: INFO: Pod "pod-secrets-d960b115-b26f-4d23-be4f-f62e82e9257b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.889215ms Apr 22 21:33:32.561: INFO: Pod "pod-secrets-d960b115-b26f-4d23-be4f-f62e82e9257b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023485966s Apr 22 21:33:34.564: INFO: Pod "pod-secrets-d960b115-b26f-4d23-be4f-f62e82e9257b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026051613s STEP: Saw pod success Apr 22 21:33:34.564: INFO: Pod "pod-secrets-d960b115-b26f-4d23-be4f-f62e82e9257b" satisfied condition "success or failure" Apr 22 21:33:34.566: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d960b115-b26f-4d23-be4f-f62e82e9257b container secret-volume-test: STEP: delete the pod Apr 22 21:33:34.613: INFO: Waiting for pod pod-secrets-d960b115-b26f-4d23-be4f-f62e82e9257b to disappear Apr 22 21:33:34.626: INFO: Pod pod-secrets-d960b115-b26f-4d23-be4f-f62e82e9257b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:33:34.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-553" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:33:34.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-2433 STEP: creating replication controller nodeport-test in namespace services-2433 I0422 21:33:34.812474 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-2433, replica count: 2 I0422 21:33:37.862894 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:33:40.863153 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 21:33:40.863: INFO: Creating new exec pod Apr 22 21:33:45.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2433 execpodtnrtn -- /bin/sh -x -c nc -zv -t -w 2 
nodeport-test 80' Apr 22 21:33:46.164: INFO: stderr: "I0422 21:33:46.069291 1632 log.go:172] (0xc0008fc630) (0xc000a42000) Create stream\nI0422 21:33:46.069363 1632 log.go:172] (0xc0008fc630) (0xc000a42000) Stream added, broadcasting: 1\nI0422 21:33:46.072181 1632 log.go:172] (0xc0008fc630) Reply frame received for 1\nI0422 21:33:46.072230 1632 log.go:172] (0xc0008fc630) (0xc0006a99a0) Create stream\nI0422 21:33:46.072252 1632 log.go:172] (0xc0008fc630) (0xc0006a99a0) Stream added, broadcasting: 3\nI0422 21:33:46.073420 1632 log.go:172] (0xc0008fc630) Reply frame received for 3\nI0422 21:33:46.073447 1632 log.go:172] (0xc0008fc630) (0xc000a420a0) Create stream\nI0422 21:33:46.073457 1632 log.go:172] (0xc0008fc630) (0xc000a420a0) Stream added, broadcasting: 5\nI0422 21:33:46.074470 1632 log.go:172] (0xc0008fc630) Reply frame received for 5\nI0422 21:33:46.158184 1632 log.go:172] (0xc0008fc630) Data frame received for 5\nI0422 21:33:46.158231 1632 log.go:172] (0xc000a420a0) (5) Data frame handling\nI0422 21:33:46.158265 1632 log.go:172] (0xc000a420a0) (5) Data frame sent\nI0422 21:33:46.158287 1632 log.go:172] (0xc0008fc630) Data frame received for 5\nI0422 21:33:46.158296 1632 log.go:172] (0xc000a420a0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0422 21:33:46.158474 1632 log.go:172] (0xc000a420a0) (5) Data frame sent\nI0422 21:33:46.158639 1632 log.go:172] (0xc0008fc630) Data frame received for 5\nI0422 21:33:46.158666 1632 log.go:172] (0xc000a420a0) (5) Data frame handling\nI0422 21:33:46.158788 1632 log.go:172] (0xc0008fc630) Data frame received for 3\nI0422 21:33:46.158808 1632 log.go:172] (0xc0006a99a0) (3) Data frame handling\nI0422 21:33:46.159934 1632 log.go:172] (0xc0008fc630) Data frame received for 1\nI0422 21:33:46.159992 1632 log.go:172] (0xc000a42000) (1) Data frame handling\nI0422 21:33:46.160016 1632 log.go:172] (0xc000a42000) (1) Data frame sent\nI0422 21:33:46.160039 1632 
log.go:172] (0xc0008fc630) (0xc000a42000) Stream removed, broadcasting: 1\nI0422 21:33:46.160063 1632 log.go:172] (0xc0008fc630) Go away received\nI0422 21:33:46.160540 1632 log.go:172] (0xc0008fc630) (0xc000a42000) Stream removed, broadcasting: 1\nI0422 21:33:46.160563 1632 log.go:172] (0xc0008fc630) (0xc0006a99a0) Stream removed, broadcasting: 3\nI0422 21:33:46.160573 1632 log.go:172] (0xc0008fc630) (0xc000a420a0) Stream removed, broadcasting: 5\n" Apr 22 21:33:46.164: INFO: stdout: "" Apr 22 21:33:46.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2433 execpodtnrtn -- /bin/sh -x -c nc -zv -t -w 2 10.99.209.107 80' Apr 22 21:33:46.384: INFO: stderr: "I0422 21:33:46.302483 1652 log.go:172] (0xc000a18580) (0xc0009140a0) Create stream\nI0422 21:33:46.302541 1652 log.go:172] (0xc000a18580) (0xc0009140a0) Stream added, broadcasting: 1\nI0422 21:33:46.304879 1652 log.go:172] (0xc000a18580) Reply frame received for 1\nI0422 21:33:46.304949 1652 log.go:172] (0xc000a18580) (0xc000914140) Create stream\nI0422 21:33:46.304970 1652 log.go:172] (0xc000a18580) (0xc000914140) Stream added, broadcasting: 3\nI0422 21:33:46.306138 1652 log.go:172] (0xc000a18580) Reply frame received for 3\nI0422 21:33:46.306175 1652 log.go:172] (0xc000a18580) (0xc000649b80) Create stream\nI0422 21:33:46.306188 1652 log.go:172] (0xc000a18580) (0xc000649b80) Stream added, broadcasting: 5\nI0422 21:33:46.307112 1652 log.go:172] (0xc000a18580) Reply frame received for 5\nI0422 21:33:46.376503 1652 log.go:172] (0xc000a18580) Data frame received for 3\nI0422 21:33:46.376555 1652 log.go:172] (0xc000914140) (3) Data frame handling\nI0422 21:33:46.376600 1652 log.go:172] (0xc000a18580) Data frame received for 5\nI0422 21:33:46.376641 1652 log.go:172] (0xc000649b80) (5) Data frame handling\nI0422 21:33:46.376671 1652 log.go:172] (0xc000649b80) (5) Data frame sent\nI0422 21:33:46.376690 1652 log.go:172] (0xc000a18580) Data frame received for 5\nI0422 
21:33:46.376703 1652 log.go:172] (0xc000649b80) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.209.107 80\nConnection to 10.99.209.107 80 port [tcp/http] succeeded!\nI0422 21:33:46.378472 1652 log.go:172] (0xc000a18580) Data frame received for 1\nI0422 21:33:46.378514 1652 log.go:172] (0xc0009140a0) (1) Data frame handling\nI0422 21:33:46.378543 1652 log.go:172] (0xc0009140a0) (1) Data frame sent\nI0422 21:33:46.378564 1652 log.go:172] (0xc000a18580) (0xc0009140a0) Stream removed, broadcasting: 1\nI0422 21:33:46.378602 1652 log.go:172] (0xc000a18580) Go away received\nI0422 21:33:46.379137 1652 log.go:172] (0xc000a18580) (0xc0009140a0) Stream removed, broadcasting: 1\nI0422 21:33:46.379163 1652 log.go:172] (0xc000a18580) (0xc000914140) Stream removed, broadcasting: 3\nI0422 21:33:46.379177 1652 log.go:172] (0xc000a18580) (0xc000649b80) Stream removed, broadcasting: 5\n" Apr 22 21:33:46.384: INFO: stdout: "" Apr 22 21:33:46.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2433 execpodtnrtn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30433' Apr 22 21:33:46.600: INFO: stderr: "I0422 21:33:46.534794 1673 log.go:172] (0xc00077c790) (0xc00095e000) Create stream\nI0422 21:33:46.534858 1673 log.go:172] (0xc00077c790) (0xc00095e000) Stream added, broadcasting: 1\nI0422 21:33:46.537647 1673 log.go:172] (0xc00077c790) Reply frame received for 1\nI0422 21:33:46.537704 1673 log.go:172] (0xc00077c790) (0xc0006a9900) Create stream\nI0422 21:33:46.537721 1673 log.go:172] (0xc00077c790) (0xc0006a9900) Stream added, broadcasting: 3\nI0422 21:33:46.538546 1673 log.go:172] (0xc00077c790) Reply frame received for 3\nI0422 21:33:46.538589 1673 log.go:172] (0xc00077c790) (0xc00095e0a0) Create stream\nI0422 21:33:46.538600 1673 log.go:172] (0xc00077c790) (0xc00095e0a0) Stream added, broadcasting: 5\nI0422 21:33:46.539388 1673 log.go:172] (0xc00077c790) Reply frame received for 5\nI0422 21:33:46.593318 1673 log.go:172] (0xc00077c790) 
Data frame received for 5\nI0422 21:33:46.593361 1673 log.go:172] (0xc00095e0a0) (5) Data frame handling\nI0422 21:33:46.593393 1673 log.go:172] (0xc00095e0a0) (5) Data frame sent\nI0422 21:33:46.593412 1673 log.go:172] (0xc00077c790) Data frame received for 5\nI0422 21:33:46.593425 1673 log.go:172] (0xc00095e0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30433\nConnection to 172.17.0.10 30433 port [tcp/30433] succeeded!\nI0422 21:33:46.593451 1673 log.go:172] (0xc00095e0a0) (5) Data frame sent\nI0422 21:33:46.593858 1673 log.go:172] (0xc00077c790) Data frame received for 3\nI0422 21:33:46.593895 1673 log.go:172] (0xc0006a9900) (3) Data frame handling\nI0422 21:33:46.593923 1673 log.go:172] (0xc00077c790) Data frame received for 5\nI0422 21:33:46.593936 1673 log.go:172] (0xc00095e0a0) (5) Data frame handling\nI0422 21:33:46.595174 1673 log.go:172] (0xc00077c790) Data frame received for 1\nI0422 21:33:46.595209 1673 log.go:172] (0xc00095e000) (1) Data frame handling\nI0422 21:33:46.595235 1673 log.go:172] (0xc00095e000) (1) Data frame sent\nI0422 21:33:46.595258 1673 log.go:172] (0xc00077c790) (0xc00095e000) Stream removed, broadcasting: 1\nI0422 21:33:46.595299 1673 log.go:172] (0xc00077c790) Go away received\nI0422 21:33:46.595666 1673 log.go:172] (0xc00077c790) (0xc00095e000) Stream removed, broadcasting: 1\nI0422 21:33:46.595685 1673 log.go:172] (0xc00077c790) (0xc0006a9900) Stream removed, broadcasting: 3\nI0422 21:33:46.595695 1673 log.go:172] (0xc00077c790) (0xc00095e0a0) Stream removed, broadcasting: 5\n" Apr 22 21:33:46.600: INFO: stdout: "" Apr 22 21:33:46.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2433 execpodtnrtn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30433' Apr 22 21:33:46.827: INFO: stderr: "I0422 21:33:46.746040 1693 log.go:172] (0xc000ac8580) (0xc00066fea0) Create stream\nI0422 21:33:46.746126 1693 log.go:172] (0xc000ac8580) (0xc00066fea0) Stream added, broadcasting: 1\nI0422 
21:33:46.748879 1693 log.go:172] (0xc000ac8580) Reply frame received for 1\nI0422 21:33:46.748917 1693 log.go:172] (0xc000ac8580) (0xc00075f540) Create stream\nI0422 21:33:46.748926 1693 log.go:172] (0xc000ac8580) (0xc00075f540) Stream added, broadcasting: 3\nI0422 21:33:46.749966 1693 log.go:172] (0xc000ac8580) Reply frame received for 3\nI0422 21:33:46.749991 1693 log.go:172] (0xc000ac8580) (0xc00066ff40) Create stream\nI0422 21:33:46.749998 1693 log.go:172] (0xc000ac8580) (0xc00066ff40) Stream added, broadcasting: 5\nI0422 21:33:46.750787 1693 log.go:172] (0xc000ac8580) Reply frame received for 5\nI0422 21:33:46.821638 1693 log.go:172] (0xc000ac8580) Data frame received for 5\nI0422 21:33:46.821673 1693 log.go:172] (0xc00066ff40) (5) Data frame handling\nI0422 21:33:46.821683 1693 log.go:172] (0xc00066ff40) (5) Data frame sent\nI0422 21:33:46.821691 1693 log.go:172] (0xc000ac8580) Data frame received for 5\nI0422 21:33:46.821698 1693 log.go:172] (0xc00066ff40) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30433\nConnection to 172.17.0.8 30433 port [tcp/30433] succeeded!\nI0422 21:33:46.821722 1693 log.go:172] (0xc000ac8580) Data frame received for 3\nI0422 21:33:46.821730 1693 log.go:172] (0xc00075f540) (3) Data frame handling\nI0422 21:33:46.822737 1693 log.go:172] (0xc000ac8580) Data frame received for 1\nI0422 21:33:46.822846 1693 log.go:172] (0xc00066fea0) (1) Data frame handling\nI0422 21:33:46.822911 1693 log.go:172] (0xc00066fea0) (1) Data frame sent\nI0422 21:33:46.822974 1693 log.go:172] (0xc000ac8580) (0xc00066fea0) Stream removed, broadcasting: 1\nI0422 21:33:46.823013 1693 log.go:172] (0xc000ac8580) Go away received\nI0422 21:33:46.823455 1693 log.go:172] (0xc000ac8580) (0xc00066fea0) Stream removed, broadcasting: 1\nI0422 21:33:46.823483 1693 log.go:172] (0xc000ac8580) (0xc00075f540) Stream removed, broadcasting: 3\nI0422 21:33:46.823500 1693 log.go:172] (0xc000ac8580) (0xc00066ff40) Stream removed, broadcasting: 5\n" Apr 22 21:33:46.827: 
INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:33:46.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2433" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.200 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":84,"skipped":1195,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:33:46.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 22 21:33:51.458: INFO: Successfully updated pod 
"annotationupdated08d6767-f4e8-44d3-9d92-659a7549dc37" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:33:53.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6362" for this suite. • [SLOW TEST:6.696 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1223,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:33:53.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-94a0ca83-6e44-41e4-931f-c2e82f4ca70e STEP: Creating the pod STEP: Updating configmap configmap-test-upd-94a0ca83-6e44-41e4-931f-c2e82f4ca70e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:35:14.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7028" for this suite. • [SLOW TEST:80.476 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1252,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:35:14.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-9db1ef95-b21b-42cc-9fca-e4cc199af605 STEP: Creating a pod to test consume configMaps Apr 22 21:35:14.112: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cfbc3c18-8cf8-4160-8abd-3f6491b26b13" in namespace "projected-1189" to be "success or failure" Apr 22 21:35:14.133: INFO: Pod 
"pod-projected-configmaps-cfbc3c18-8cf8-4160-8abd-3f6491b26b13": Phase="Pending", Reason="", readiness=false. Elapsed: 20.824738ms Apr 22 21:35:16.215: INFO: Pod "pod-projected-configmaps-cfbc3c18-8cf8-4160-8abd-3f6491b26b13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102947338s Apr 22 21:35:18.219: INFO: Pod "pod-projected-configmaps-cfbc3c18-8cf8-4160-8abd-3f6491b26b13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107053784s STEP: Saw pod success Apr 22 21:35:18.219: INFO: Pod "pod-projected-configmaps-cfbc3c18-8cf8-4160-8abd-3f6491b26b13" satisfied condition "success or failure" Apr 22 21:35:18.222: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-cfbc3c18-8cf8-4160-8abd-3f6491b26b13 container projected-configmap-volume-test: STEP: delete the pod Apr 22 21:35:18.247: INFO: Waiting for pod pod-projected-configmaps-cfbc3c18-8cf8-4160-8abd-3f6491b26b13 to disappear Apr 22 21:35:18.250: INFO: Pod pod-projected-configmaps-cfbc3c18-8cf8-4160-8abd-3f6491b26b13 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:35:18.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1189" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:35:18.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:35:24.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3850" for this suite. 
• [SLOW TEST:6.209 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":88,"skipped":1310,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:35:24.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 22 21:35:24.802: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 22 21:35:29.812: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:35:30.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9" for this suite. 
• [SLOW TEST:6.390 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":89,"skipped":1319,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:35:30.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:35:37.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6587" for this suite. STEP: Destroying namespace "nsdeletetest-9451" for this suite. 
Apr 22 21:35:37.275: INFO: Namespace nsdeletetest-9451 was already deleted STEP: Destroying namespace "nsdeletetest-3852" for this suite. • [SLOW TEST:6.420 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":90,"skipped":1335,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:35:37.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-a784795c-69d0-460a-89b6-2e1daead642f STEP: Creating a pod to test consume configMaps Apr 22 21:35:37.400: INFO: Waiting up to 5m0s for pod "pod-configmaps-8fda8fa9-95b8-4a08-9e09-6f79c7a1950d" in namespace "configmap-499" to be "success or failure" Apr 22 21:35:37.404: INFO: Pod "pod-configmaps-8fda8fa9-95b8-4a08-9e09-6f79c7a1950d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.286479ms Apr 22 21:35:39.407: INFO: Pod "pod-configmaps-8fda8fa9-95b8-4a08-9e09-6f79c7a1950d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006701011s Apr 22 21:35:41.412: INFO: Pod "pod-configmaps-8fda8fa9-95b8-4a08-9e09-6f79c7a1950d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01110423s STEP: Saw pod success Apr 22 21:35:41.412: INFO: Pod "pod-configmaps-8fda8fa9-95b8-4a08-9e09-6f79c7a1950d" satisfied condition "success or failure" Apr 22 21:35:41.414: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-8fda8fa9-95b8-4a08-9e09-6f79c7a1950d container configmap-volume-test: STEP: delete the pod Apr 22 21:35:41.487: INFO: Waiting for pod pod-configmaps-8fda8fa9-95b8-4a08-9e09-6f79c7a1950d to disappear Apr 22 21:35:41.491: INFO: Pod pod-configmaps-8fda8fa9-95b8-4a08-9e09-6f79c7a1950d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:35:41.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-499" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1336,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:35:41.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Apr 22 21:35:41.600: INFO: Waiting up to 5m0s for pod "client-containers-77dfad0f-3695-46d4-ad54-a63e2c136b9f" in namespace "containers-6701" to be "success or failure" Apr 22 21:35:41.611: INFO: Pod "client-containers-77dfad0f-3695-46d4-ad54-a63e2c136b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.920703ms Apr 22 21:35:43.618: INFO: Pod "client-containers-77dfad0f-3695-46d4-ad54-a63e2c136b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017256148s Apr 22 21:35:45.622: INFO: Pod "client-containers-77dfad0f-3695-46d4-ad54-a63e2c136b9f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021976842s STEP: Saw pod success Apr 22 21:35:45.622: INFO: Pod "client-containers-77dfad0f-3695-46d4-ad54-a63e2c136b9f" satisfied condition "success or failure" Apr 22 21:35:45.626: INFO: Trying to get logs from node jerma-worker pod client-containers-77dfad0f-3695-46d4-ad54-a63e2c136b9f container test-container: STEP: delete the pod Apr 22 21:35:45.642: INFO: Waiting for pod client-containers-77dfad0f-3695-46d4-ad54-a63e2c136b9f to disappear Apr 22 21:35:45.647: INFO: Pod client-containers-77dfad0f-3695-46d4-ad54-a63e2c136b9f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:35:45.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6701" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1340,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:35:45.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:35:45.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2989" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":93,"skipped":1349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:35:45.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-704b3108-e8be-4b17-b083-dedb4bae0550 STEP: Creating a pod to test consume configMaps Apr 22 21:35:45.942: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1eeb3a84-8764-44a7-b3fa-2f2c40ed210d" in namespace "projected-7214" to be "success or failure" Apr 22 21:35:46.054: INFO: Pod 
"pod-projected-configmaps-1eeb3a84-8764-44a7-b3fa-2f2c40ed210d": Phase="Pending", Reason="", readiness=false. Elapsed: 112.061123ms Apr 22 21:35:48.059: INFO: Pod "pod-projected-configmaps-1eeb3a84-8764-44a7-b3fa-2f2c40ed210d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116576352s Apr 22 21:35:50.063: INFO: Pod "pod-projected-configmaps-1eeb3a84-8764-44a7-b3fa-2f2c40ed210d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120634654s STEP: Saw pod success Apr 22 21:35:50.063: INFO: Pod "pod-projected-configmaps-1eeb3a84-8764-44a7-b3fa-2f2c40ed210d" satisfied condition "success or failure" Apr 22 21:35:50.066: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-1eeb3a84-8764-44a7-b3fa-2f2c40ed210d container projected-configmap-volume-test: STEP: delete the pod Apr 22 21:35:50.085: INFO: Waiting for pod pod-projected-configmaps-1eeb3a84-8764-44a7-b3fa-2f2c40ed210d to disappear Apr 22 21:35:50.106: INFO: Pod pod-projected-configmaps-1eeb3a84-8764-44a7-b3fa-2f2c40ed210d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:35:50.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7214" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:35:50.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 22 21:35:50.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9264' Apr 22 21:35:50.266: INFO: stderr: "" Apr 22 21:35:50.266: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Apr 22 21:35:50.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9264' Apr 22 
21:35:54.565: INFO: stderr: "" Apr 22 21:35:54.565: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:35:54.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9264" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":95,"skipped":1411,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:35:54.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 22 21:35:55.582: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 22 21:35:57.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188155, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188155, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188155, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188155, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:36:00.715: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:36:00.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:36:01.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1493" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.351 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":96,"skipped":1418,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:36:01.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 22 21:36:02.008: INFO: Waiting up to 5m0s for pod "downward-api-890e336d-5e37-47df-84c6-b5e68f2bad05" in namespace "downward-api-2136" to be "success or failure" Apr 22 21:36:02.028: INFO: Pod "downward-api-890e336d-5e37-47df-84c6-b5e68f2bad05": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.352873ms Apr 22 21:36:04.032: INFO: Pod "downward-api-890e336d-5e37-47df-84c6-b5e68f2bad05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024053519s Apr 22 21:36:06.036: INFO: Pod "downward-api-890e336d-5e37-47df-84c6-b5e68f2bad05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028341854s STEP: Saw pod success Apr 22 21:36:06.036: INFO: Pod "downward-api-890e336d-5e37-47df-84c6-b5e68f2bad05" satisfied condition "success or failure" Apr 22 21:36:06.040: INFO: Trying to get logs from node jerma-worker pod downward-api-890e336d-5e37-47df-84c6-b5e68f2bad05 container dapi-container: STEP: delete the pod Apr 22 21:36:06.057: INFO: Waiting for pod downward-api-890e336d-5e37-47df-84c6-b5e68f2bad05 to disappear Apr 22 21:36:06.061: INFO: Pod downward-api-890e336d-5e37-47df-84c6-b5e68f2bad05 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:36:06.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2136" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1454,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:36:06.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Apr 22 21:36:06.196: INFO: Waiting up to 5m0s for pod "var-expansion-790d4211-8e7f-453a-b563-7cee580aeeb8" in namespace "var-expansion-5326" to be "success or failure" Apr 22 21:36:06.199: INFO: Pod "var-expansion-790d4211-8e7f-453a-b563-7cee580aeeb8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.223553ms Apr 22 21:36:08.203: INFO: Pod "var-expansion-790d4211-8e7f-453a-b563-7cee580aeeb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007081011s Apr 22 21:36:10.208: INFO: Pod "var-expansion-790d4211-8e7f-453a-b563-7cee580aeeb8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011500448s STEP: Saw pod success Apr 22 21:36:10.208: INFO: Pod "var-expansion-790d4211-8e7f-453a-b563-7cee580aeeb8" satisfied condition "success or failure" Apr 22 21:36:10.211: INFO: Trying to get logs from node jerma-worker pod var-expansion-790d4211-8e7f-453a-b563-7cee580aeeb8 container dapi-container: STEP: delete the pod Apr 22 21:36:10.230: INFO: Waiting for pod var-expansion-790d4211-8e7f-453a-b563-7cee580aeeb8 to disappear Apr 22 21:36:10.234: INFO: Pod var-expansion-790d4211-8e7f-453a-b563-7cee580aeeb8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:36:10.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5326" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1473,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:36:10.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:36:14.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-342" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1489,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:36:14.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 22 21:36:14.406: INFO: >>> kubeConfig: /root/.kube/config Apr 22 21:36:16.343: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:36:26.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8719" for this suite. 
• [SLOW TEST:12.622 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":100,"skipped":1496,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:36:26.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:36:27.577: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:36:29.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188187, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188187, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188187, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188187, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:36:31.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188187, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188187, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188187, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188187, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:36:34.620: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:36:35.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-414" for this suite. STEP: Destroying namespace "webhook-414-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.172 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":101,"skipped":1518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:36:35.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Apr 22 21:36:35.213: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6872" to be "success or failure" Apr 22 21:36:35.218: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.789109ms Apr 22 21:36:37.222: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008306618s Apr 22 21:36:39.226: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.012029379s Apr 22 21:36:41.229: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015896498s STEP: Saw pod success Apr 22 21:36:41.229: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 22 21:36:41.232: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 22 21:36:41.261: INFO: Waiting for pod pod-host-path-test to disappear Apr 22 21:36:41.278: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:36:41.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6872" for this suite. 
• [SLOW TEST:6.133 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1545,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:36:41.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:37:03.427: INFO: Container started at 2020-04-22 21:36:43 +0000 UTC, pod became ready at 2020-04-22 21:37:02 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:37:03.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9584" for this suite. 
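The readiness-probe check above (container started at 21:36:43, pod became Ready at 21:37:02) boils down to asserting that the pod does not report Ready before the probe's initialDelaySeconds have elapsed. A minimal sketch of that invariant, using the timestamps from the log; the delay value of 15 seconds is illustrative, since the probe spec itself is not shown in the log:

```python
from datetime import datetime, timedelta

def ready_respects_initial_delay(started_at, ready_at, initial_delay_s):
    """A pod guarded by a readiness probe must not report Ready before
    initialDelaySeconds have elapsed after the container started."""
    return ready_at - started_at >= timedelta(seconds=initial_delay_s)

# Timestamps taken from the log above; the delay is an assumed example value.
started = datetime(2020, 4, 22, 21, 36, 43)
ready = datetime(2020, 4, 22, 21, 37, 2)
assert ready_respects_initial_delay(started, ready, 15)
```

The test additionally verifies the container is never restarted, i.e. a failing readiness probe only removes the pod from service endpoints, unlike a liveness probe.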
• [SLOW TEST:22.149 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1558,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:37:03.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:37:03.499: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 22 21:37:08.504: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 22 21:37:08.504: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 22 21:37:08.564: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:{test-cleanup-deployment deployment-450 /apis/apps/v1/namespaces/deployment-450/deployments/test-cleanup-deployment c3709f94-d84e-4ceb-954f-2859e3fefa06 10225279 1 2020-04-22 21:37:08 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00417b5b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 22 21:37:08.608: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-450 /apis/apps/v1/namespaces/deployment-450/replicasets/test-cleanup-deployment-55ffc6b7b6 1257fbd6-46f4-4ccb-843c-98c5faa70d40 10225287 1 2020-04-22 21:37:08 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c3709f94-d84e-4ceb-954f-2859e3fefa06 0xc0041469c7 0xc0041469c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004146a68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 21:37:08.608: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 22 21:37:08.608: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-450 /apis/apps/v1/namespaces/deployment-450/replicasets/test-cleanup-controller ac604bf1-ed82-4a08-8643-e11e70b25144 10225280 1 2020-04-22 21:37:03 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment c3709f94-d84e-4ceb-954f-2859e3fefa06 0xc0041468df 0xc0041468f0}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004146958 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 22 21:37:08.655: INFO: Pod "test-cleanup-controller-gf6kk" is available: &Pod{ObjectMeta:{test-cleanup-controller-gf6kk test-cleanup-controller- deployment-450 /api/v1/namespaces/deployment-450/pods/test-cleanup-controller-gf6kk d821f477-cc29-46ea-b322-b2c1edab6c26 10225270 0 2020-04-22 21:37:03 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller ac604bf1-ed82-4a08-8643-e11e70b25144 0xc00417b8c7 0xc00417b8c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mbhd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mbhd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mbhd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:37:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:37:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:37:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.178,StartTime:2020-04-22 21:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 21:37:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://37fc7265024d8af25238a9de0f4f90e9fc9b26ff785576422923e39b9e74411c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 21:37:08.655: INFO: 
Pod "test-cleanup-deployment-55ffc6b7b6-ghfck" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-ghfck test-cleanup-deployment-55ffc6b7b6- deployment-450 /api/v1/namespaces/deployment-450/pods/test-cleanup-deployment-55ffc6b7b6-ghfck bb2e8752-f21a-4dc0-b9bc-6267d3b0a543 10225286 0 2020-04-22 21:37:08 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 1257fbd6-46f4-4ccb-843c-98c5faa70d40 0xc00417ba57 0xc00417ba58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mbhd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mbhd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mbhd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDev
ice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:37:08.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-450" for this suite. 
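The history cleanup being verified follows from the Deployment's revisionHistoryLimit, which is set to 0 in the spec dumped above (`RevisionHistoryLimit:*0`): scaled-down old ReplicaSets beyond the limit are deleted, oldest revision first. A simplified, hypothetical model of that pruning rule, not the actual controller code:

```python
def replicasets_to_delete(old_replicasets, revision_history_limit):
    """Simplified sketch of Deployment history pruning: inactive
    (zero-replica) old ReplicaSets beyond the limit are deleted,
    oldest revision first. Each ReplicaSet is modeled as a dict."""
    inactive = [rs for rs in old_replicasets if rs["replicas"] == 0]
    inactive.sort(key=lambda rs: rs["revision"])
    excess = len(inactive) - revision_history_limit
    return inactive[:excess] if excess > 0 else []

# With revisionHistoryLimit: 0, as in the dump above, every scaled-down
# old ReplicaSet is removed once the rollout completes.
old = [{"name": "test-cleanup-controller", "revision": 1, "replicas": 0}]
assert [rs["name"] for rs in replicasets_to_delete(old, 0)] == ["test-cleanup-controller"]
```

Note that a still-active old ReplicaSet (replicas > 0) is never pruned, which is why the log shows test-cleanup-controller surviving until the new ReplicaSet takes over.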
• [SLOW TEST:5.284 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":104,"skipped":1566,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:37:08.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:37:09.448: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:37:11.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188229, loc:(*time.Location)(0x78ee080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188229, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188229, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188229, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:37:14.606: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:37:14.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8804" for this suite. STEP: Destroying namespace "webhook-8804-markers" for this suite. 
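The behavior exercised here hinges on the webhook's failurePolicy: with `Fail`, an unreachable webhook causes the apiserver to reject the request outright, while `Ignore` admits it as if the webhook were absent. A toy model of that admission decision (not the real apiserver logic):

```python
def admit(webhook_reachable, webhook_allows, failure_policy):
    """Simplified admission decision for a validating webhook.
    failure_policy is 'Fail' or 'Ignore', per admissionregistration.k8s.io."""
    if not webhook_reachable:
        # Webhook cannot be called: the failure policy decides.
        return failure_policy == "Ignore"
    return webhook_allows

# The test registers a webhook the apiserver cannot talk to, with
# failurePolicy: Fail, so every configmap create is unconditionally rejected.
assert admit(webhook_reachable=False, webhook_allows=True, failure_policy="Fail") is False
assert admit(webhook_reachable=False, webhook_allows=True, failure_policy="Ignore") is True
```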
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.065 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":105,"skipped":1568,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:37:14.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 22 21:37:14.865: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:37:20.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1059" for this suite. • [SLOW TEST:5.939 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":106,"skipped":1600,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:37:20.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 21:37:20.793: INFO: Waiting up to 
5m0s for pod "downwardapi-volume-a77054c7-6018-4683-b650-11e86a7f8a6b" in namespace "projected-2494" to be "success or failure" Apr 22 21:37:20.811: INFO: Pod "downwardapi-volume-a77054c7-6018-4683-b650-11e86a7f8a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.632957ms Apr 22 21:37:22.833: INFO: Pod "downwardapi-volume-a77054c7-6018-4683-b650-11e86a7f8a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040051233s Apr 22 21:37:24.838: INFO: Pod "downwardapi-volume-a77054c7-6018-4683-b650-11e86a7f8a6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044625939s STEP: Saw pod success Apr 22 21:37:24.838: INFO: Pod "downwardapi-volume-a77054c7-6018-4683-b650-11e86a7f8a6b" satisfied condition "success or failure" Apr 22 21:37:24.841: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a77054c7-6018-4683-b650-11e86a7f8a6b container client-container: STEP: delete the pod Apr 22 21:37:24.877: INFO: Waiting for pod downwardapi-volume-a77054c7-6018-4683-b650-11e86a7f8a6b to disappear Apr 22 21:37:24.882: INFO: Pod downwardapi-volume-a77054c7-6018-4683-b650-11e86a7f8a6b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:37:24.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2494" for this suite. 
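What this test asserts is the downward API fallback for `limits.cpu`: when a container declares no CPU limit, the value projected through a resourceFieldRef defaults to the node's allocatable CPU. A hypothetical sketch of that fallback rule (the allocatable figure below is an assumed example, not taken from this cluster):

```python
def effective_cpu_limit(container_limit_millicores, node_allocatable_millicores):
    """Downward API resourceFieldRef for limits.cpu: falls back to the
    node's allocatable CPU when the container sets no explicit limit."""
    if container_limit_millicores is not None:
        return container_limit_millicores
    return node_allocatable_millicores

# No limit set on the container -> node allocatable is exposed instead.
assert effective_cpu_limit(None, 16000) == 16000
# An explicit limit always wins.
assert effective_cpu_limit(500, 16000) == 500
```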
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1603,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:37:24.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-cb007b22-00d7-4692-a9d1-86879b460e35 STEP: Creating a pod to test consume configMaps Apr 22 21:37:24.968: INFO: Waiting up to 5m0s for pod "pod-configmaps-fa47e050-66b6-479d-afa2-c6161cd54496" in namespace "configmap-7288" to be "success or failure" Apr 22 21:37:24.972: INFO: Pod "pod-configmaps-fa47e050-66b6-479d-afa2-c6161cd54496": Phase="Pending", Reason="", readiness=false. Elapsed: 4.477873ms Apr 22 21:37:27.079: INFO: Pod "pod-configmaps-fa47e050-66b6-479d-afa2-c6161cd54496": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11108275s Apr 22 21:37:29.083: INFO: Pod "pod-configmaps-fa47e050-66b6-479d-afa2-c6161cd54496": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115391367s Apr 22 21:37:31.088: INFO: Pod "pod-configmaps-fa47e050-66b6-479d-afa2-c6161cd54496": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.119731402s STEP: Saw pod success Apr 22 21:37:31.088: INFO: Pod "pod-configmaps-fa47e050-66b6-479d-afa2-c6161cd54496" satisfied condition "success or failure" Apr 22 21:37:31.091: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-fa47e050-66b6-479d-afa2-c6161cd54496 container configmap-volume-test: STEP: delete the pod Apr 22 21:37:31.120: INFO: Waiting for pod pod-configmaps-fa47e050-66b6-479d-afa2-c6161cd54496 to disappear Apr 22 21:37:31.129: INFO: Pod pod-configmaps-fa47e050-66b6-479d-afa2-c6161cd54496 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:37:31.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7288" for this suite. • [SLOW TEST:6.247 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1613,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 
21:37:31.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 22 21:37:31.963: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 22 21:37:33.973: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188252, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188252, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188252, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188251, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:37:37.025: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:37:37.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:37:38.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-47" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.203 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":109,"skipped":1660,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:37:38.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting 
for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 22 21:37:42.426: INFO: &Pod{ObjectMeta:{send-events-473e6acc-79f1-4c10-9738-469b13068e6c events-233 /api/v1/namespaces/events-233/pods/send-events-473e6acc-79f1-4c10-9738-469b13068e6c c95c88b3-ef96-4148-b80a-fde8faec9188 10225639 0 2020-04-22 21:37:38 +0000 UTC map[name:foo time:403341043] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmjjj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmjjj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmjjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{
},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:37:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:37:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:37:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:37:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.183,StartTime:2020-04-22 21:37:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 21:37:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://955ceaf68318abe7c4f634f994ed8cf94147db220a0819000dfae012be4b9328,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 22 21:37:44.431: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 22 21:37:46.435: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:37:46.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-233" for this suite. 
• [SLOW TEST:8.137 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":110,"skipped":1664,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:37:46.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 22 21:37:50.591: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Apr 22 21:37:50.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2424" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1665,"failed":0} ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:37:50.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Apr 22 21:37:54.728: INFO: Pod pod-hostip-60e0e9de-e343-4518-bb47-a93532b9fb8e has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:37:54.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5777" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1665,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:37:54.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:37:55.420: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:37:57.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188275, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188275, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188275, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188275, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:38:00.467: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:38:00.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8205-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:38:01.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1194" for this suite. STEP: Destroying namespace "webhook-1194-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.112 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":113,"skipped":1679,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:38:01.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Apr 22 21:38:01.933: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix005374839/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:38:02.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8693" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":114,"skipped":1681,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:38:02.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 22 21:38:02.090: INFO: Waiting up to 5m0s for pod "downward-api-ac268a24-5f6b-4212-93f7-6cc0983e53ff" in namespace "downward-api-5497" to be "success or failure" Apr 22 21:38:02.105: INFO: Pod "downward-api-ac268a24-5f6b-4212-93f7-6cc0983e53ff": Phase="Pending", Reason="", readiness=false. Elapsed: 15.44643ms Apr 22 21:38:04.109: INFO: Pod "downward-api-ac268a24-5f6b-4212-93f7-6cc0983e53ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01911797s Apr 22 21:38:06.113: INFO: Pod "downward-api-ac268a24-5f6b-4212-93f7-6cc0983e53ff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023209979s STEP: Saw pod success Apr 22 21:38:06.113: INFO: Pod "downward-api-ac268a24-5f6b-4212-93f7-6cc0983e53ff" satisfied condition "success or failure" Apr 22 21:38:06.116: INFO: Trying to get logs from node jerma-worker2 pod downward-api-ac268a24-5f6b-4212-93f7-6cc0983e53ff container dapi-container: STEP: delete the pod Apr 22 21:38:06.137: INFO: Waiting for pod downward-api-ac268a24-5f6b-4212-93f7-6cc0983e53ff to disappear Apr 22 21:38:06.142: INFO: Pod downward-api-ac268a24-5f6b-4212-93f7-6cc0983e53ff no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:38:06.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5497" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1742,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:38:06.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-61a409ba-4827-485a-a5f8-379632e0d181 in namespace container-probe-9469 Apr 22 21:38:10.249: INFO: Started pod test-webserver-61a409ba-4827-485a-a5f8-379632e0d181 in namespace container-probe-9469 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 21:38:10.252: INFO: Initial restart count of pod test-webserver-61a409ba-4827-485a-a5f8-379632e0d181 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:42:11.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9469" for this suite. • [SLOW TEST:245.093 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1788,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:42:11.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 21:42:11.517: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14551d1b-6561-4294-bd40-2d56baae4ea9" in namespace "downward-api-7096" to be "success or failure" Apr 22 21:42:11.522: INFO: Pod "downwardapi-volume-14551d1b-6561-4294-bd40-2d56baae4ea9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.749453ms Apr 22 21:42:13.537: INFO: Pod "downwardapi-volume-14551d1b-6561-4294-bd40-2d56baae4ea9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019793601s Apr 22 21:42:15.541: INFO: Pod "downwardapi-volume-14551d1b-6561-4294-bd40-2d56baae4ea9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023773278s STEP: Saw pod success Apr 22 21:42:15.541: INFO: Pod "downwardapi-volume-14551d1b-6561-4294-bd40-2d56baae4ea9" satisfied condition "success or failure" Apr 22 21:42:15.544: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-14551d1b-6561-4294-bd40-2d56baae4ea9 container client-container: STEP: delete the pod Apr 22 21:42:15.600: INFO: Waiting for pod downwardapi-volume-14551d1b-6561-4294-bd40-2d56baae4ea9 to disappear Apr 22 21:42:15.663: INFO: Pod downwardapi-volume-14551d1b-6561-4294-bd40-2d56baae4ea9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:42:15.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7096" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1803,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:42:15.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-4ead093c-37ba-4a1b-88cb-74c45d71284f STEP: Creating a pod to test consume secrets Apr 22 21:42:15.746: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e21ff076-4f34-4c37-8a08-8385a715d964" in namespace "projected-9271" to be "success or failure" Apr 22 21:42:15.749: INFO: Pod "pod-projected-secrets-e21ff076-4f34-4c37-8a08-8385a715d964": Phase="Pending", Reason="", readiness=false. Elapsed: 3.153161ms Apr 22 21:42:17.752: INFO: Pod "pod-projected-secrets-e21ff076-4f34-4c37-8a08-8385a715d964": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006034349s Apr 22 21:42:19.757: INFO: Pod "pod-projected-secrets-e21ff076-4f34-4c37-8a08-8385a715d964": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011186866s STEP: Saw pod success Apr 22 21:42:19.757: INFO: Pod "pod-projected-secrets-e21ff076-4f34-4c37-8a08-8385a715d964" satisfied condition "success or failure" Apr 22 21:42:19.760: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e21ff076-4f34-4c37-8a08-8385a715d964 container projected-secret-volume-test: STEP: delete the pod Apr 22 21:42:19.774: INFO: Waiting for pod pod-projected-secrets-e21ff076-4f34-4c37-8a08-8385a715d964 to disappear Apr 22 21:42:19.779: INFO: Pod pod-projected-secrets-e21ff076-4f34-4c37-8a08-8385a715d964 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:42:19.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9271" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1822,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:42:19.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:42:19.870: INFO: (0) 
/api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 5.755039ms) Apr 22 21:42:19.874: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.648714ms) Apr 22 21:42:19.877: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.560485ms) Apr 22 21:42:19.881: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.637369ms) Apr 22 21:42:19.884: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.303168ms) Apr 22 21:42:19.888: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.534855ms) Apr 22 21:42:19.891: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.552335ms) Apr 22 21:42:19.894: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.963432ms) Apr 22 21:42:19.897: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.747908ms) Apr 22 21:42:19.900: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.875861ms) Apr 22 21:42:19.903: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.085853ms) Apr 22 21:42:19.906: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.260675ms) Apr 22 21:42:19.927: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 20.767754ms) Apr 22 21:42:19.931: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.009374ms) Apr 22 21:42:19.935: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.076962ms) Apr 22 21:42:19.939: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.000869ms) Apr 22 21:42:19.943: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.842299ms) Apr 22 21:42:19.947: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.519538ms) Apr 22 21:42:19.951: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.8332ms) Apr 22 21:42:19.954: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.44044ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:42:19.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9730" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":119,"skipped":1865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:42:19.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:42:21.450: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:42:23.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188541, loc:(*time.Location)(0x78ee080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188541, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188541, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188541, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:42:26.579: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:42:38.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8160" for this suite. STEP: Destroying namespace "webhook-8160-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.829 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":120,"skipped":1900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:42:38.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 21:42:39.559: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 22 21:42:41.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188559, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188559, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188559, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188559, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 21:42:44.601: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Apr 22 21:42:45.116: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:42:45.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1054" for this suite.
STEP: Destroying namespace "webhook-1054-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.825 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":121,"skipped":1922,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:42:45.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-b41ac976-5ba6-4a86-b77f-7b52f2c083d8
STEP: Creating a pod to test consume secrets
Apr 22 21:42:45.739: INFO: Waiting up to 5m0s for pod "pod-secrets-56ade29a-5be5-443c-bffc-1e17c84188c7" in namespace "secrets-3229" to be "success or failure"
Apr 22 21:42:46.059: INFO: Pod "pod-secrets-56ade29a-5be5-443c-bffc-1e17c84188c7": Phase="Pending", Reason="", readiness=false. Elapsed: 319.948494ms
Apr 22 21:42:48.070: INFO: Pod "pod-secrets-56ade29a-5be5-443c-bffc-1e17c84188c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330775079s
Apr 22 21:42:50.075: INFO: Pod "pod-secrets-56ade29a-5be5-443c-bffc-1e17c84188c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.335057844s
STEP: Saw pod success
Apr 22 21:42:50.075: INFO: Pod "pod-secrets-56ade29a-5be5-443c-bffc-1e17c84188c7" satisfied condition "success or failure"
Apr 22 21:42:50.078: INFO: Trying to get logs from node jerma-worker pod pod-secrets-56ade29a-5be5-443c-bffc-1e17c84188c7 container secret-volume-test:
STEP: delete the pod
Apr 22 21:42:50.115: INFO: Waiting for pod pod-secrets-56ade29a-5be5-443c-bffc-1e17c84188c7 to disappear
Apr 22 21:42:50.119: INFO: Pod pod-secrets-56ade29a-5be5-443c-bffc-1e17c84188c7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:42:50.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3229" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1965,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:42:50.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 22 21:42:50.180: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 21:42:50.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2818" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":123,"skipped":1977,"failed":0}
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 21:42:50.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 22 21:43:01.003: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3434 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 22 21:43:01.003: INFO: >>> kubeConfig: /root/.kube/config
I0422 21:43:01.039863 6 log.go:172] (0xc0017dabb0) (0xc0023aa140) Create stream I0422 21:43:01.039900 6 log.go:172] (0xc0017dabb0) (0xc0023aa140) Stream added, broadcasting: 1 I0422 21:43:01.042376 6 log.go:172] (0xc0017dabb0) Reply frame received for 1 I0422 21:43:01.042425 6 log.go:172] (0xc0017dabb0) (0xc00230a0a0) Create stream I0422 21:43:01.042443 6 log.go:172] (0xc0017dabb0)
(0xc00230a0a0) Stream added, broadcasting: 3 I0422 21:43:01.043400 6 log.go:172] (0xc0017dabb0) Reply frame received for 3 I0422 21:43:01.043418 6 log.go:172] (0xc0017dabb0) (0xc00230a1e0) Create stream I0422 21:43:01.043430 6 log.go:172] (0xc0017dabb0) (0xc00230a1e0) Stream added, broadcasting: 5 I0422 21:43:01.044271 6 log.go:172] (0xc0017dabb0) Reply frame received for 5 I0422 21:43:01.127496 6 log.go:172] (0xc0017dabb0) Data frame received for 5 I0422 21:43:01.127608 6 log.go:172] (0xc00230a1e0) (5) Data frame handling I0422 21:43:01.127652 6 log.go:172] (0xc0017dabb0) Data frame received for 3 I0422 21:43:01.127675 6 log.go:172] (0xc00230a0a0) (3) Data frame handling I0422 21:43:01.127693 6 log.go:172] (0xc00230a0a0) (3) Data frame sent I0422 21:43:01.127705 6 log.go:172] (0xc0017dabb0) Data frame received for 3 I0422 21:43:01.127714 6 log.go:172] (0xc00230a0a0) (3) Data frame handling I0422 21:43:01.129293 6 log.go:172] (0xc0017dabb0) Data frame received for 1 I0422 21:43:01.129332 6 log.go:172] (0xc0023aa140) (1) Data frame handling I0422 21:43:01.129360 6 log.go:172] (0xc0023aa140) (1) Data frame sent I0422 21:43:01.129394 6 log.go:172] (0xc0017dabb0) (0xc0023aa140) Stream removed, broadcasting: 1 I0422 21:43:01.129417 6 log.go:172] (0xc0017dabb0) Go away received I0422 21:43:01.129569 6 log.go:172] (0xc0017dabb0) (0xc0023aa140) Stream removed, broadcasting: 1 I0422 21:43:01.129601 6 log.go:172] (0xc0017dabb0) (0xc00230a0a0) Stream removed, broadcasting: 3 I0422 21:43:01.129614 6 log.go:172] (0xc0017dabb0) (0xc00230a1e0) Stream removed, broadcasting: 5 Apr 22 21:43:01.129: INFO: Exec stderr: "" Apr 22 21:43:01.129: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3434 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:43:01.129: INFO: >>> kubeConfig: /root/.kube/config I0422 21:43:01.162313 6 log.go:172] (0xc0011c7ef0) (0xc001aae320) Create stream 
I0422 21:43:01.162341 6 log.go:172] (0xc0011c7ef0) (0xc001aae320) Stream added, broadcasting: 1 I0422 21:43:01.168419 6 log.go:172] (0xc0011c7ef0) Reply frame received for 1 I0422 21:43:01.168530 6 log.go:172] (0xc0011c7ef0) (0xc001d26b40) Create stream I0422 21:43:01.168594 6 log.go:172] (0xc0011c7ef0) (0xc001d26b40) Stream added, broadcasting: 3 I0422 21:43:01.170863 6 log.go:172] (0xc0011c7ef0) Reply frame received for 3 I0422 21:43:01.170942 6 log.go:172] (0xc0011c7ef0) (0xc00230a320) Create stream I0422 21:43:01.170967 6 log.go:172] (0xc0011c7ef0) (0xc00230a320) Stream added, broadcasting: 5 I0422 21:43:01.173918 6 log.go:172] (0xc0011c7ef0) Reply frame received for 5 I0422 21:43:01.232218 6 log.go:172] (0xc0011c7ef0) Data frame received for 3 I0422 21:43:01.232261 6 log.go:172] (0xc001d26b40) (3) Data frame handling I0422 21:43:01.232271 6 log.go:172] (0xc001d26b40) (3) Data frame sent I0422 21:43:01.232279 6 log.go:172] (0xc0011c7ef0) Data frame received for 3 I0422 21:43:01.232286 6 log.go:172] (0xc001d26b40) (3) Data frame handling I0422 21:43:01.232305 6 log.go:172] (0xc0011c7ef0) Data frame received for 5 I0422 21:43:01.232313 6 log.go:172] (0xc00230a320) (5) Data frame handling I0422 21:43:01.233533 6 log.go:172] (0xc0011c7ef0) Data frame received for 1 I0422 21:43:01.233562 6 log.go:172] (0xc001aae320) (1) Data frame handling I0422 21:43:01.233572 6 log.go:172] (0xc001aae320) (1) Data frame sent I0422 21:43:01.233585 6 log.go:172] (0xc0011c7ef0) (0xc001aae320) Stream removed, broadcasting: 1 I0422 21:43:01.233612 6 log.go:172] (0xc0011c7ef0) Go away received I0422 21:43:01.233684 6 log.go:172] (0xc0011c7ef0) (0xc001aae320) Stream removed, broadcasting: 1 I0422 21:43:01.233698 6 log.go:172] (0xc0011c7ef0) (0xc001d26b40) Stream removed, broadcasting: 3 I0422 21:43:01.233705 6 log.go:172] (0xc0011c7ef0) (0xc00230a320) Stream removed, broadcasting: 5 Apr 22 21:43:01.233: INFO: Exec stderr: "" Apr 22 21:43:01.233: INFO: ExecWithOptions {Command:[cat 
/etc/hosts] Namespace:e2e-kubelet-etc-hosts-3434 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:43:01.233: INFO: >>> kubeConfig: /root/.kube/config I0422 21:43:01.257404 6 log.go:172] (0xc00158cc60) (0xc001d26fa0) Create stream I0422 21:43:01.257441 6 log.go:172] (0xc00158cc60) (0xc001d26fa0) Stream added, broadcasting: 1 I0422 21:43:01.259738 6 log.go:172] (0xc00158cc60) Reply frame received for 1 I0422 21:43:01.259780 6 log.go:172] (0xc00158cc60) (0xc001d27180) Create stream I0422 21:43:01.259797 6 log.go:172] (0xc00158cc60) (0xc001d27180) Stream added, broadcasting: 3 I0422 21:43:01.260938 6 log.go:172] (0xc00158cc60) Reply frame received for 3 I0422 21:43:01.260971 6 log.go:172] (0xc00158cc60) (0xc001aae3c0) Create stream I0422 21:43:01.260984 6 log.go:172] (0xc00158cc60) (0xc001aae3c0) Stream added, broadcasting: 5 I0422 21:43:01.262047 6 log.go:172] (0xc00158cc60) Reply frame received for 5 I0422 21:43:01.315112 6 log.go:172] (0xc00158cc60) Data frame received for 5 I0422 21:43:01.315133 6 log.go:172] (0xc001aae3c0) (5) Data frame handling I0422 21:43:01.315147 6 log.go:172] (0xc00158cc60) Data frame received for 3 I0422 21:43:01.315152 6 log.go:172] (0xc001d27180) (3) Data frame handling I0422 21:43:01.315164 6 log.go:172] (0xc001d27180) (3) Data frame sent I0422 21:43:01.315171 6 log.go:172] (0xc00158cc60) Data frame received for 3 I0422 21:43:01.315182 6 log.go:172] (0xc001d27180) (3) Data frame handling I0422 21:43:01.316833 6 log.go:172] (0xc00158cc60) Data frame received for 1 I0422 21:43:01.316864 6 log.go:172] (0xc001d26fa0) (1) Data frame handling I0422 21:43:01.316883 6 log.go:172] (0xc001d26fa0) (1) Data frame sent I0422 21:43:01.316900 6 log.go:172] (0xc00158cc60) (0xc001d26fa0) Stream removed, broadcasting: 1 I0422 21:43:01.316914 6 log.go:172] (0xc00158cc60) Go away received I0422 21:43:01.317064 6 log.go:172] (0xc00158cc60) (0xc001d26fa0) Stream removed, broadcasting: 1 
I0422 21:43:01.317088 6 log.go:172] (0xc00158cc60) (0xc001d27180) Stream removed, broadcasting: 3 I0422 21:43:01.317345 6 log.go:172] (0xc00158cc60) (0xc001aae3c0) Stream removed, broadcasting: 5 Apr 22 21:43:01.317: INFO: Exec stderr: "" Apr 22 21:43:01.317: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3434 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:43:01.317: INFO: >>> kubeConfig: /root/.kube/config I0422 21:43:01.352422 6 log.go:172] (0xc00158d340) (0xc001d27540) Create stream I0422 21:43:01.352456 6 log.go:172] (0xc00158d340) (0xc001d27540) Stream added, broadcasting: 1 I0422 21:43:01.355351 6 log.go:172] (0xc00158d340) Reply frame received for 1 I0422 21:43:01.355397 6 log.go:172] (0xc00158d340) (0xc00230a3c0) Create stream I0422 21:43:01.355413 6 log.go:172] (0xc00158d340) (0xc00230a3c0) Stream added, broadcasting: 3 I0422 21:43:01.356558 6 log.go:172] (0xc00158d340) Reply frame received for 3 I0422 21:43:01.356598 6 log.go:172] (0xc00158d340) (0xc001aa6e60) Create stream I0422 21:43:01.356615 6 log.go:172] (0xc00158d340) (0xc001aa6e60) Stream added, broadcasting: 5 I0422 21:43:01.358109 6 log.go:172] (0xc00158d340) Reply frame received for 5 I0422 21:43:01.417701 6 log.go:172] (0xc00158d340) Data frame received for 3 I0422 21:43:01.417748 6 log.go:172] (0xc00230a3c0) (3) Data frame handling I0422 21:43:01.417784 6 log.go:172] (0xc00230a3c0) (3) Data frame sent I0422 21:43:01.417820 6 log.go:172] (0xc00158d340) Data frame received for 3 I0422 21:43:01.417851 6 log.go:172] (0xc00230a3c0) (3) Data frame handling I0422 21:43:01.417933 6 log.go:172] (0xc00158d340) Data frame received for 5 I0422 21:43:01.417968 6 log.go:172] (0xc001aa6e60) (5) Data frame handling I0422 21:43:01.419585 6 log.go:172] (0xc00158d340) Data frame received for 1 I0422 21:43:01.419601 6 log.go:172] (0xc001d27540) (1) Data frame handling I0422 21:43:01.419609 6 
log.go:172] (0xc001d27540) (1) Data frame sent I0422 21:43:01.419618 6 log.go:172] (0xc00158d340) (0xc001d27540) Stream removed, broadcasting: 1 I0422 21:43:01.419806 6 log.go:172] (0xc00158d340) (0xc001d27540) Stream removed, broadcasting: 1 I0422 21:43:01.419865 6 log.go:172] (0xc00158d340) (0xc00230a3c0) Stream removed, broadcasting: 3 I0422 21:43:01.419925 6 log.go:172] (0xc00158d340) Go away received I0422 21:43:01.420093 6 log.go:172] (0xc00158d340) (0xc001aa6e60) Stream removed, broadcasting: 5 Apr 22 21:43:01.420: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 22 21:43:01.420: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3434 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:43:01.420: INFO: >>> kubeConfig: /root/.kube/config I0422 21:43:01.451676 6 log.go:172] (0xc00122c2c0) (0xc00230a780) Create stream I0422 21:43:01.451700 6 log.go:172] (0xc00122c2c0) (0xc00230a780) Stream added, broadcasting: 1 I0422 21:43:01.454998 6 log.go:172] (0xc00122c2c0) Reply frame received for 1 I0422 21:43:01.455032 6 log.go:172] (0xc00122c2c0) (0xc001d27680) Create stream I0422 21:43:01.455043 6 log.go:172] (0xc00122c2c0) (0xc001d27680) Stream added, broadcasting: 3 I0422 21:43:01.456253 6 log.go:172] (0xc00122c2c0) Reply frame received for 3 I0422 21:43:01.456280 6 log.go:172] (0xc00122c2c0) (0xc0023aa280) Create stream I0422 21:43:01.456289 6 log.go:172] (0xc00122c2c0) (0xc0023aa280) Stream added, broadcasting: 5 I0422 21:43:01.457544 6 log.go:172] (0xc00122c2c0) Reply frame received for 5 I0422 21:43:01.530601 6 log.go:172] (0xc00122c2c0) Data frame received for 5 I0422 21:43:01.530634 6 log.go:172] (0xc0023aa280) (5) Data frame handling I0422 21:43:01.530670 6 log.go:172] (0xc00122c2c0) Data frame received for 3 I0422 21:43:01.530684 6 log.go:172] (0xc001d27680) (3) Data frame 
handling I0422 21:43:01.530696 6 log.go:172] (0xc001d27680) (3) Data frame sent I0422 21:43:01.530713 6 log.go:172] (0xc00122c2c0) Data frame received for 3 I0422 21:43:01.530722 6 log.go:172] (0xc001d27680) (3) Data frame handling I0422 21:43:01.532141 6 log.go:172] (0xc00122c2c0) Data frame received for 1 I0422 21:43:01.532164 6 log.go:172] (0xc00230a780) (1) Data frame handling I0422 21:43:01.532180 6 log.go:172] (0xc00230a780) (1) Data frame sent I0422 21:43:01.532210 6 log.go:172] (0xc00122c2c0) (0xc00230a780) Stream removed, broadcasting: 1 I0422 21:43:01.532350 6 log.go:172] (0xc00122c2c0) (0xc00230a780) Stream removed, broadcasting: 1 I0422 21:43:01.532375 6 log.go:172] (0xc00122c2c0) (0xc001d27680) Stream removed, broadcasting: 3 I0422 21:43:01.532394 6 log.go:172] (0xc00122c2c0) (0xc0023aa280) Stream removed, broadcasting: 5 Apr 22 21:43:01.532: INFO: Exec stderr: "" Apr 22 21:43:01.532: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3434 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:43:01.532: INFO: >>> kubeConfig: /root/.kube/config I0422 21:43:01.534690 6 log.go:172] (0xc00122c2c0) Go away received I0422 21:43:01.562818 6 log.go:172] (0xc0017daf20) (0xc0023aa500) Create stream I0422 21:43:01.562845 6 log.go:172] (0xc0017daf20) (0xc0023aa500) Stream added, broadcasting: 1 I0422 21:43:01.565560 6 log.go:172] (0xc0017daf20) Reply frame received for 1 I0422 21:43:01.565607 6 log.go:172] (0xc0017daf20) (0xc0023aa5a0) Create stream I0422 21:43:01.565622 6 log.go:172] (0xc0017daf20) (0xc0023aa5a0) Stream added, broadcasting: 3 I0422 21:43:01.566402 6 log.go:172] (0xc0017daf20) Reply frame received for 3 I0422 21:43:01.566426 6 log.go:172] (0xc0017daf20) (0xc001aae460) Create stream I0422 21:43:01.566437 6 log.go:172] (0xc0017daf20) (0xc001aae460) Stream added, broadcasting: 5 I0422 21:43:01.567225 6 log.go:172] (0xc0017daf20) Reply frame 
received for 5 I0422 21:43:01.625458 6 log.go:172] (0xc0017daf20) Data frame received for 3 I0422 21:43:01.625499 6 log.go:172] (0xc0023aa5a0) (3) Data frame handling I0422 21:43:01.625525 6 log.go:172] (0xc0023aa5a0) (3) Data frame sent I0422 21:43:01.625651 6 log.go:172] (0xc0017daf20) Data frame received for 3 I0422 21:43:01.625702 6 log.go:172] (0xc0023aa5a0) (3) Data frame handling I0422 21:43:01.625753 6 log.go:172] (0xc0017daf20) Data frame received for 5 I0422 21:43:01.625773 6 log.go:172] (0xc001aae460) (5) Data frame handling I0422 21:43:01.627644 6 log.go:172] (0xc0017daf20) Data frame received for 1 I0422 21:43:01.627656 6 log.go:172] (0xc0023aa500) (1) Data frame handling I0422 21:43:01.627666 6 log.go:172] (0xc0023aa500) (1) Data frame sent I0422 21:43:01.627673 6 log.go:172] (0xc0017daf20) (0xc0023aa500) Stream removed, broadcasting: 1 I0422 21:43:01.627765 6 log.go:172] (0xc0017daf20) (0xc0023aa500) Stream removed, broadcasting: 1 I0422 21:43:01.627782 6 log.go:172] (0xc0017daf20) (0xc0023aa5a0) Stream removed, broadcasting: 3 I0422 21:43:01.627909 6 log.go:172] (0xc0017daf20) (0xc001aae460) Stream removed, broadcasting: 5 I0422 21:43:01.627965 6 log.go:172] (0xc0017daf20) Go away received Apr 22 21:43:01.628: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 22 21:43:01.628: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3434 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:43:01.628: INFO: >>> kubeConfig: /root/.kube/config I0422 21:43:01.660684 6 log.go:172] (0xc0017db550) (0xc0023aadc0) Create stream I0422 21:43:01.660704 6 log.go:172] (0xc0017db550) (0xc0023aadc0) Stream added, broadcasting: 1 I0422 21:43:01.663253 6 log.go:172] (0xc0017db550) Reply frame received for 1 I0422 21:43:01.663309 6 log.go:172] (0xc0017db550) (0xc0023ab2c0) Create stream 
I0422 21:43:01.663331 6 log.go:172] (0xc0017db550) (0xc0023ab2c0) Stream added, broadcasting: 3 I0422 21:43:01.664415 6 log.go:172] (0xc0017db550) Reply frame received for 3 I0422 21:43:01.664437 6 log.go:172] (0xc0017db550) (0xc001aa6fa0) Create stream I0422 21:43:01.664448 6 log.go:172] (0xc0017db550) (0xc001aa6fa0) Stream added, broadcasting: 5 I0422 21:43:01.665712 6 log.go:172] (0xc0017db550) Reply frame received for 5 I0422 21:43:01.723519 6 log.go:172] (0xc0017db550) Data frame received for 5 I0422 21:43:01.723554 6 log.go:172] (0xc001aa6fa0) (5) Data frame handling I0422 21:43:01.723589 6 log.go:172] (0xc0017db550) Data frame received for 3 I0422 21:43:01.723605 6 log.go:172] (0xc0023ab2c0) (3) Data frame handling I0422 21:43:01.723619 6 log.go:172] (0xc0023ab2c0) (3) Data frame sent I0422 21:43:01.723633 6 log.go:172] (0xc0017db550) Data frame received for 3 I0422 21:43:01.723648 6 log.go:172] (0xc0023ab2c0) (3) Data frame handling I0422 21:43:01.725789 6 log.go:172] (0xc0017db550) Data frame received for 1 I0422 21:43:01.725843 6 log.go:172] (0xc0023aadc0) (1) Data frame handling I0422 21:43:01.725878 6 log.go:172] (0xc0023aadc0) (1) Data frame sent I0422 21:43:01.725901 6 log.go:172] (0xc0017db550) (0xc0023aadc0) Stream removed, broadcasting: 1 I0422 21:43:01.725925 6 log.go:172] (0xc0017db550) Go away received I0422 21:43:01.726045 6 log.go:172] (0xc0017db550) (0xc0023aadc0) Stream removed, broadcasting: 1 I0422 21:43:01.726061 6 log.go:172] (0xc0017db550) (0xc0023ab2c0) Stream removed, broadcasting: 3 I0422 21:43:01.726071 6 log.go:172] (0xc0017db550) (0xc001aa6fa0) Stream removed, broadcasting: 5 Apr 22 21:43:01.726: INFO: Exec stderr: "" Apr 22 21:43:01.726: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3434 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:43:01.726: INFO: >>> kubeConfig: /root/.kube/config I0422 
21:43:01.758444 6 log.go:172] (0xc0017dbb80) (0xc0023abb80) Create stream I0422 21:43:01.758469 6 log.go:172] (0xc0017dbb80) (0xc0023abb80) Stream added, broadcasting: 1 I0422 21:43:01.760850 6 log.go:172] (0xc0017dbb80) Reply frame received for 1 I0422 21:43:01.760892 6 log.go:172] (0xc0017dbb80) (0xc001d27a40) Create stream I0422 21:43:01.760908 6 log.go:172] (0xc0017dbb80) (0xc001d27a40) Stream added, broadcasting: 3 I0422 21:43:01.762256 6 log.go:172] (0xc0017dbb80) Reply frame received for 3 I0422 21:43:01.762301 6 log.go:172] (0xc0017dbb80) (0xc0023abc20) Create stream I0422 21:43:01.762321 6 log.go:172] (0xc0017dbb80) (0xc0023abc20) Stream added, broadcasting: 5 I0422 21:43:01.763407 6 log.go:172] (0xc0017dbb80) Reply frame received for 5 I0422 21:43:01.817602 6 log.go:172] (0xc0017dbb80) Data frame received for 5 I0422 21:43:01.817654 6 log.go:172] (0xc0023abc20) (5) Data frame handling I0422 21:43:01.817690 6 log.go:172] (0xc0017dbb80) Data frame received for 3 I0422 21:43:01.817710 6 log.go:172] (0xc001d27a40) (3) Data frame handling I0422 21:43:01.817737 6 log.go:172] (0xc001d27a40) (3) Data frame sent I0422 21:43:01.817759 6 log.go:172] (0xc0017dbb80) Data frame received for 3 I0422 21:43:01.817778 6 log.go:172] (0xc001d27a40) (3) Data frame handling I0422 21:43:01.818983 6 log.go:172] (0xc0017dbb80) Data frame received for 1 I0422 21:43:01.819002 6 log.go:172] (0xc0023abb80) (1) Data frame handling I0422 21:43:01.819011 6 log.go:172] (0xc0023abb80) (1) Data frame sent I0422 21:43:01.819021 6 log.go:172] (0xc0017dbb80) (0xc0023abb80) Stream removed, broadcasting: 1 I0422 21:43:01.819095 6 log.go:172] (0xc0017dbb80) (0xc0023abb80) Stream removed, broadcasting: 1 I0422 21:43:01.819115 6 log.go:172] (0xc0017dbb80) (0xc001d27a40) Stream removed, broadcasting: 3 I0422 21:43:01.819228 6 log.go:172] (0xc0017dbb80) (0xc0023abc20) Stream removed, broadcasting: 5 Apr 22 21:43:01.819: INFO: Exec stderr: "" Apr 22 21:43:01.819: INFO: ExecWithOptions {Command:[cat 
/etc/hosts] Namespace:e2e-kubelet-etc-hosts-3434 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:43:01.819: INFO: >>> kubeConfig: /root/.kube/config I0422 21:43:01.822280 6 log.go:172] (0xc0017dbb80) Go away received I0422 21:43:01.891081 6 log.go:172] (0xc00122c8f0) (0xc00230a960) Create stream I0422 21:43:01.891115 6 log.go:172] (0xc00122c8f0) (0xc00230a960) Stream added, broadcasting: 1 I0422 21:43:01.893639 6 log.go:172] (0xc00122c8f0) Reply frame received for 1 I0422 21:43:01.893677 6 log.go:172] (0xc00122c8f0) (0xc001aae5a0) Create stream I0422 21:43:01.893691 6 log.go:172] (0xc00122c8f0) (0xc001aae5a0) Stream added, broadcasting: 3 I0422 21:43:01.894802 6 log.go:172] (0xc00122c8f0) Reply frame received for 3 I0422 21:43:01.894845 6 log.go:172] (0xc00122c8f0) (0xc0023abcc0) Create stream I0422 21:43:01.894861 6 log.go:172] (0xc00122c8f0) (0xc0023abcc0) Stream added, broadcasting: 5 I0422 21:43:01.895851 6 log.go:172] (0xc00122c8f0) Reply frame received for 5 I0422 21:43:01.965862 6 log.go:172] (0xc00122c8f0) Data frame received for 3 I0422 21:43:01.965900 6 log.go:172] (0xc001aae5a0) (3) Data frame handling I0422 21:43:01.965912 6 log.go:172] (0xc001aae5a0) (3) Data frame sent I0422 21:43:01.965964 6 log.go:172] (0xc00122c8f0) Data frame received for 5 I0422 21:43:01.966002 6 log.go:172] (0xc0023abcc0) (5) Data frame handling I0422 21:43:01.966048 6 log.go:172] (0xc00122c8f0) Data frame received for 3 I0422 21:43:01.966065 6 log.go:172] (0xc001aae5a0) (3) Data frame handling I0422 21:43:01.967776 6 log.go:172] (0xc00122c8f0) Data frame received for 1 I0422 21:43:01.967808 6 log.go:172] (0xc00230a960) (1) Data frame handling I0422 21:43:01.967824 6 log.go:172] (0xc00230a960) (1) Data frame sent I0422 21:43:01.967837 6 log.go:172] (0xc00122c8f0) (0xc00230a960) Stream removed, broadcasting: 1 I0422 21:43:01.967938 6 log.go:172] (0xc00122c8f0) (0xc00230a960) Stream removed, 
broadcasting: 1 I0422 21:43:01.967959 6 log.go:172] (0xc00122c8f0) (0xc001aae5a0) Stream removed, broadcasting: 3 I0422 21:43:01.967980 6 log.go:172] (0xc00122c8f0) (0xc0023abcc0) Stream removed, broadcasting: 5 Apr 22 21:43:01.967: INFO: Exec stderr: "" I0422 21:43:01.968021 6 log.go:172] (0xc00122c8f0) Go away received Apr 22 21:43:01.968: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3434 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:43:01.968: INFO: >>> kubeConfig: /root/.kube/config I0422 21:43:02.010923 6 log.go:172] (0xc001eda210) (0xc001d981e0) Create stream I0422 21:43:02.010969 6 log.go:172] (0xc001eda210) (0xc001d981e0) Stream added, broadcasting: 1 I0422 21:43:02.013546 6 log.go:172] (0xc001eda210) Reply frame received for 1 I0422 21:43:02.013594 6 log.go:172] (0xc001eda210) (0xc00230aaa0) Create stream I0422 21:43:02.013611 6 log.go:172] (0xc001eda210) (0xc00230aaa0) Stream added, broadcasting: 3 I0422 21:43:02.014760 6 log.go:172] (0xc001eda210) Reply frame received for 3 I0422 21:43:02.014802 6 log.go:172] (0xc001eda210) (0xc001d27b80) Create stream I0422 21:43:02.014815 6 log.go:172] (0xc001eda210) (0xc001d27b80) Stream added, broadcasting: 5 I0422 21:43:02.015745 6 log.go:172] (0xc001eda210) Reply frame received for 5 I0422 21:43:02.078164 6 log.go:172] (0xc001eda210) Data frame received for 3 I0422 21:43:02.078199 6 log.go:172] (0xc00230aaa0) (3) Data frame handling I0422 21:43:02.078232 6 log.go:172] (0xc00230aaa0) (3) Data frame sent I0422 21:43:02.078247 6 log.go:172] (0xc001eda210) Data frame received for 3 I0422 21:43:02.078281 6 log.go:172] (0xc00230aaa0) (3) Data frame handling I0422 21:43:02.078325 6 log.go:172] (0xc001eda210) Data frame received for 5 I0422 21:43:02.078375 6 log.go:172] (0xc001d27b80) (5) Data frame handling I0422 21:43:02.079800 6 log.go:172] (0xc001eda210) Data frame received for 1 I0422 
21:43:02.079865 6 log.go:172] (0xc001d981e0) (1) Data frame handling I0422 21:43:02.079893 6 log.go:172] (0xc001d981e0) (1) Data frame sent I0422 21:43:02.079905 6 log.go:172] (0xc001eda210) (0xc001d981e0) Stream removed, broadcasting: 1 I0422 21:43:02.079923 6 log.go:172] (0xc001eda210) Go away received I0422 21:43:02.080092 6 log.go:172] (0xc001eda210) (0xc001d981e0) Stream removed, broadcasting: 1 I0422 21:43:02.080130 6 log.go:172] (0xc001eda210) (0xc00230aaa0) Stream removed, broadcasting: 3 I0422 21:43:02.080166 6 log.go:172] (0xc001eda210) (0xc001d27b80) Stream removed, broadcasting: 5 Apr 22 21:43:02.080: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:43:02.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3434" for this suite. • [SLOW TEST:11.207 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1982,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:43:02.087: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 22 21:43:02.166: INFO: Waiting up to 5m0s for pod "pod-91937cec-b2ec-47aa-b862-b357815a19fd" in namespace "emptydir-2307" to be "success or failure" Apr 22 21:43:02.170: INFO: Pod "pod-91937cec-b2ec-47aa-b862-b357815a19fd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.680799ms Apr 22 21:43:04.176: INFO: Pod "pod-91937cec-b2ec-47aa-b862-b357815a19fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010028711s Apr 22 21:43:06.182: INFO: Pod "pod-91937cec-b2ec-47aa-b862-b357815a19fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015819502s STEP: Saw pod success Apr 22 21:43:06.182: INFO: Pod "pod-91937cec-b2ec-47aa-b862-b357815a19fd" satisfied condition "success or failure" Apr 22 21:43:06.202: INFO: Trying to get logs from node jerma-worker pod pod-91937cec-b2ec-47aa-b862-b357815a19fd container test-container: STEP: delete the pod Apr 22 21:43:06.213: INFO: Waiting for pod pod-91937cec-b2ec-47aa-b862-b357815a19fd to disappear Apr 22 21:43:06.228: INFO: Pod pod-91937cec-b2ec-47aa-b862-b357815a19fd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:43:06.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2307" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1983,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:43:06.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 22 21:43:12.368: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8267 PodName:pod-sharedvolume-1409a53f-30d5-4c1f-945d-8767ff7a9320 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:43:12.368: INFO: >>> kubeConfig: /root/.kube/config I0422 21:43:12.394653 6 log.go:172] (0xc00158dd90) (0xc0023da460) Create stream I0422 21:43:12.394692 6 log.go:172] (0xc00158dd90) (0xc0023da460) Stream added, broadcasting: 1 I0422 21:43:12.397050 6 log.go:172] (0xc00158dd90) Reply frame received for 1 I0422 21:43:12.397082 6 log.go:172] (0xc00158dd90) (0xc001aae640) Create stream I0422 21:43:12.397096 6 log.go:172] (0xc00158dd90) (0xc001aae640) Stream added, broadcasting: 3 I0422 21:43:12.398394 6 log.go:172] (0xc00158dd90) Reply frame received for 3 I0422 21:43:12.398436 6 
log.go:172] (0xc00158dd90) (0xc00228c5a0) Create stream I0422 21:43:12.398455 6 log.go:172] (0xc00158dd90) (0xc00228c5a0) Stream added, broadcasting: 5 I0422 21:43:12.399218 6 log.go:172] (0xc00158dd90) Reply frame received for 5 I0422 21:43:12.502557 6 log.go:172] (0xc00158dd90) Data frame received for 3 I0422 21:43:12.502592 6 log.go:172] (0xc001aae640) (3) Data frame handling I0422 21:43:12.502601 6 log.go:172] (0xc001aae640) (3) Data frame sent I0422 21:43:12.502608 6 log.go:172] (0xc00158dd90) Data frame received for 3 I0422 21:43:12.502617 6 log.go:172] (0xc001aae640) (3) Data frame handling I0422 21:43:12.502637 6 log.go:172] (0xc00158dd90) Data frame received for 5 I0422 21:43:12.502650 6 log.go:172] (0xc00228c5a0) (5) Data frame handling I0422 21:43:12.504210 6 log.go:172] (0xc00158dd90) Data frame received for 1 I0422 21:43:12.504255 6 log.go:172] (0xc0023da460) (1) Data frame handling I0422 21:43:12.504283 6 log.go:172] (0xc0023da460) (1) Data frame sent I0422 21:43:12.504309 6 log.go:172] (0xc00158dd90) (0xc0023da460) Stream removed, broadcasting: 1 I0422 21:43:12.504367 6 log.go:172] (0xc00158dd90) Go away received I0422 21:43:12.504498 6 log.go:172] (0xc00158dd90) (0xc0023da460) Stream removed, broadcasting: 1 I0422 21:43:12.504529 6 log.go:172] (0xc00158dd90) (0xc001aae640) Stream removed, broadcasting: 3 I0422 21:43:12.504550 6 log.go:172] (0xc00158dd90) (0xc00228c5a0) Stream removed, broadcasting: 5 Apr 22 21:43:12.504: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:43:12.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8267" for this suite. 
• [SLOW TEST:6.276 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":126,"skipped":2002,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:43:12.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Apr 22 21:43:12.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 22 21:43:15.071: INFO: stderr: "" Apr 22 21:43:15.071: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl 
cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:43:15.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4239" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":127,"skipped":2019,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:43:15.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 22 21:43:15.163: INFO: PodSpec: initContainers in spec.initContainers Apr 22 21:44:03.611: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-994370fd-fb44-464f-8755-59dac29475be", GenerateName:"", Namespace:"init-container-5557", SelfLink:"/api/v1/namespaces/init-container-5557/pods/pod-init-994370fd-fb44-464f-8755-59dac29475be", 
UID:"641a2372-3127-4e0b-899e-ffed32e73689", ResourceVersion:"10227346", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723188595, loc:(*time.Location)(0x78ee080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"163595749"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6z2pn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc007771280), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), 
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6z2pn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6z2pn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6z2pn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00417a3a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028f00c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00417a430)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00417a450)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", 
Priority:(*int32)(0xc00417a458), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00417a45c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188595, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188595, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188595, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723188595, loc:(*time.Location)(0x78ee080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.68", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.68"}}, StartTime:(*v1.Time)(0xc0027e0d40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc0027e4620)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027e4690)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://491ba901dcc5eb3b80c079fb539c8e6b77deaa0067e6d39c88828513bee0b1a5", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0027e0d80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0027e0d60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00417a4df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:44:03.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5557" for this suite. 
• [SLOW TEST:48.539 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":128,"skipped":2049,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:44:03.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b7b8b73f-4ebd-4746-9303-9ec059d8ae9e STEP: Creating a pod to test consume secrets Apr 22 21:44:03.687: INFO: Waiting up to 5m0s for pod "pod-secrets-210989bd-8614-4870-a8e4-922d1c6e5519" in namespace "secrets-5015" to be "success or failure" Apr 22 21:44:03.691: INFO: Pod "pod-secrets-210989bd-8614-4870-a8e4-922d1c6e5519": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17913ms Apr 22 21:44:05.695: INFO: Pod "pod-secrets-210989bd-8614-4870-a8e4-922d1c6e5519": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007659938s Apr 22 21:44:07.713: INFO: Pod "pod-secrets-210989bd-8614-4870-a8e4-922d1c6e5519": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025943256s STEP: Saw pod success Apr 22 21:44:07.713: INFO: Pod "pod-secrets-210989bd-8614-4870-a8e4-922d1c6e5519" satisfied condition "success or failure" Apr 22 21:44:07.716: INFO: Trying to get logs from node jerma-worker pod pod-secrets-210989bd-8614-4870-a8e4-922d1c6e5519 container secret-env-test: STEP: delete the pod Apr 22 21:44:07.795: INFO: Waiting for pod pod-secrets-210989bd-8614-4870-a8e4-922d1c6e5519 to disappear Apr 22 21:44:07.805: INFO: Pod pod-secrets-210989bd-8614-4870-a8e4-922d1c6e5519 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:44:07.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5015" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2080,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:44:07.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do 
test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-172.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-172.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-172.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-172.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-172.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-172.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 21:44:13.976: INFO: DNS probes using dns-172/dns-test-1773000f-2b69-41a0-bf8f-841b3352f34f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:44:14.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-172" for this suite. 
• [SLOW TEST:6.286 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":130,"skipped":2113,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:44:14.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 22 21:44:14.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9105' Apr 22 21:44:14.402: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 22 21:44:14.402: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Apr 22 21:44:14.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-9105' Apr 22 21:44:14.583: INFO: stderr: "" Apr 22 21:44:14.583: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:44:14.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9105" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":131,"skipped":2122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:44:14.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 22 21:44:14.634: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:44:23.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8348" for this suite. • [SLOW TEST:9.032 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":132,"skipped":2153,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:44:23.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:44:23.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 
22 21:44:24.310: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-22T21:44:24Z generation:1 name:name1 resourceVersion:10227540 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a651441c-fa37-4782-a8fb-48b56baa2554] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 22 21:44:34.315: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-22T21:44:34Z generation:1 name:name2 resourceVersion:10227589 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8a2e0005-f782-4111-a6fa-aaf080fc5881] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 22 21:44:44.321: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-22T21:44:24Z generation:2 name:name1 resourceVersion:10227621 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a651441c-fa37-4782-a8fb-48b56baa2554] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 22 21:44:54.328: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-22T21:44:34Z generation:2 name:name2 resourceVersion:10227651 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8a2e0005-f782-4111-a6fa-aaf080fc5881] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 22 21:45:04.336: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-22T21:44:24Z generation:2 name:name1 resourceVersion:10227681 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a651441c-fa37-4782-a8fb-48b56baa2554] 
num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 22 21:45:14.345: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-22T21:44:34Z generation:2 name:name2 resourceVersion:10227711 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8a2e0005-f782-4111-a6fa-aaf080fc5881] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:45:25.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-7700" for this suite. • [SLOW TEST:61.771 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":133,"skipped":2156,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client 
Apr 22 21:45:25.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:45:29.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-329" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:45:29.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-f9522acf-9d0d-435f-97ec-d758565a0181 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 
21:45:29.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5439" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":135,"skipped":2198,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:45:29.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 22 21:45:29.761: INFO: Waiting up to 5m0s for pod "pod-12e5312d-c98d-43c3-96b3-24eb216fe207" in namespace "emptydir-3193" to be "success or failure" Apr 22 21:45:29.769: INFO: Pod "pod-12e5312d-c98d-43c3-96b3-24eb216fe207": Phase="Pending", Reason="", readiness=false. Elapsed: 7.662619ms Apr 22 21:45:31.773: INFO: Pod "pod-12e5312d-c98d-43c3-96b3-24eb216fe207": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01198743s Apr 22 21:45:33.780: INFO: Pod "pod-12e5312d-c98d-43c3-96b3-24eb216fe207": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018194488s STEP: Saw pod success Apr 22 21:45:33.780: INFO: Pod "pod-12e5312d-c98d-43c3-96b3-24eb216fe207" satisfied condition "success or failure" Apr 22 21:45:33.782: INFO: Trying to get logs from node jerma-worker pod pod-12e5312d-c98d-43c3-96b3-24eb216fe207 container test-container: STEP: delete the pod Apr 22 21:45:33.794: INFO: Waiting for pod pod-12e5312d-c98d-43c3-96b3-24eb216fe207 to disappear Apr 22 21:45:33.799: INFO: Pod pod-12e5312d-c98d-43c3-96b3-24eb216fe207 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:45:33.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3193" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2201,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:45:33.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 22 21:45:34.587: INFO: Pod name wrapped-volume-race-f563c762-e29c-4644-97ac-81a8b058c08b: Found 0 pods out of 5 Apr 22 
21:45:39.596: INFO: Pod name wrapped-volume-race-f563c762-e29c-4644-97ac-81a8b058c08b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f563c762-e29c-4644-97ac-81a8b058c08b in namespace emptydir-wrapper-2998, will wait for the garbage collector to delete the pods Apr 22 21:45:53.689: INFO: Deleting ReplicationController wrapped-volume-race-f563c762-e29c-4644-97ac-81a8b058c08b took: 7.366643ms Apr 22 21:45:53.990: INFO: Terminating ReplicationController wrapped-volume-race-f563c762-e29c-4644-97ac-81a8b058c08b pods took: 300.263827ms STEP: Creating RC which spawns configmap-volume pods Apr 22 21:46:00.923: INFO: Pod name wrapped-volume-race-1f74365a-ef95-44de-8892-e0ada6804839: Found 0 pods out of 5 Apr 22 21:46:05.961: INFO: Pod name wrapped-volume-race-1f74365a-ef95-44de-8892-e0ada6804839: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1f74365a-ef95-44de-8892-e0ada6804839 in namespace emptydir-wrapper-2998, will wait for the garbage collector to delete the pods Apr 22 21:46:18.550: INFO: Deleting ReplicationController wrapped-volume-race-1f74365a-ef95-44de-8892-e0ada6804839 took: 7.746185ms Apr 22 21:46:18.951: INFO: Terminating ReplicationController wrapped-volume-race-1f74365a-ef95-44de-8892-e0ada6804839 pods took: 400.274155ms STEP: Creating RC which spawns configmap-volume pods Apr 22 21:46:29.704: INFO: Pod name wrapped-volume-race-8b294f52-0844-4144-8d57-2812c87fa0cc: Found 0 pods out of 5 Apr 22 21:46:34.712: INFO: Pod name wrapped-volume-race-8b294f52-0844-4144-8d57-2812c87fa0cc: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8b294f52-0844-4144-8d57-2812c87fa0cc in namespace emptydir-wrapper-2998, will wait for the garbage collector to delete the pods Apr 22 21:46:50.821: INFO: Deleting ReplicationController 
wrapped-volume-race-8b294f52-0844-4144-8d57-2812c87fa0cc took: 7.482936ms Apr 22 21:46:51.222: INFO: Terminating ReplicationController wrapped-volume-race-8b294f52-0844-4144-8d57-2812c87fa0cc pods took: 400.293737ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:46:59.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2998" for this suite. • [SLOW TEST:86.081 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":137,"skipped":2204,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:46:59.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:46:59.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-882' Apr 22 21:47:00.278: INFO: stderr: "" Apr 22 21:47:00.278: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 22 21:47:00.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-882' Apr 22 21:47:00.537: INFO: stderr: "" Apr 22 21:47:00.537: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 22 21:47:01.542: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:47:01.542: INFO: Found 0 / 1 Apr 22 21:47:02.542: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:47:02.542: INFO: Found 0 / 1 Apr 22 21:47:03.542: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:47:03.542: INFO: Found 1 / 1 Apr 22 21:47:03.542: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 22 21:47:03.545: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:47:03.545: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 22 21:47:03.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-pgx4x --namespace=kubectl-882' Apr 22 21:47:03.653: INFO: stderr: "" Apr 22 21:47:03.653: INFO: stdout: "Name: agnhost-master-pgx4x\nNamespace: kubectl-882\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Wed, 22 Apr 2020 21:47:00 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.196\nIPs:\n IP: 10.244.2.196\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://c2c4e88f4710cb6fe00f4abf08d5ca8f4ea7373f8356e9f9975f1ea808fb4706\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 22 Apr 2020 21:47:02 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-hhjw4 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-hhjw4:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-hhjw4\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-882/agnhost-master-pgx4x to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Apr 22 21:47:03.653: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-882' Apr 22 21:47:03.782: INFO: stderr: "" Apr 22 21:47:03.782: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-882\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-pgx4x\n" Apr 22 21:47:03.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-882' Apr 22 21:47:03.900: INFO: stderr: "" Apr 22 21:47:03.900: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-882\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.97.1.67\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.196:6379\nSession Affinity: None\nEvents: <none>\n" Apr 22 21:47:03.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Apr 22 21:47:04.040: INFO: stderr: "" Apr 22 21:47:04.040: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: 
false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: <unset>\n RenewTime: Wed, 22 Apr 2020 21:46:54 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 22 Apr 2020 21:45:54 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 22 Apr 2020 21:45:54 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 22 Apr 2020 21:45:54 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 22 Apr 2020 21:45:54 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
38d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 38d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 38d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Apr 22 21:47:04.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-882' Apr 22 21:47:04.156: INFO: stderr: "" Apr 22 21:47:04.156: INFO: stdout: "Name: kubectl-882\nLabels: e2e-framework=kubectl\n e2e-run=ba471c40-6b3c-4ca7-aac7-e12dcd8d8e88\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:47:04.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-882" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":138,"skipped":2217,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:47:04.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 22 21:47:04.262: INFO: Waiting up to 5m0s for pod "pod-526fab5f-0ed4-4b51-8699-bbf22622ef35" in namespace "emptydir-3742" to be "success or failure" Apr 22 21:47:04.278: INFO: Pod "pod-526fab5f-0ed4-4b51-8699-bbf22622ef35": Phase="Pending", Reason="", readiness=false. Elapsed: 16.204391ms Apr 22 21:47:06.320: INFO: Pod "pod-526fab5f-0ed4-4b51-8699-bbf22622ef35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057664815s Apr 22 21:47:08.326: INFO: Pod "pod-526fab5f-0ed4-4b51-8699-bbf22622ef35": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.064123042s STEP: Saw pod success Apr 22 21:47:08.326: INFO: Pod "pod-526fab5f-0ed4-4b51-8699-bbf22622ef35" satisfied condition "success or failure" Apr 22 21:47:08.357: INFO: Trying to get logs from node jerma-worker pod pod-526fab5f-0ed4-4b51-8699-bbf22622ef35 container test-container: STEP: delete the pod Apr 22 21:47:08.466: INFO: Waiting for pod pod-526fab5f-0ed4-4b51-8699-bbf22622ef35 to disappear Apr 22 21:47:08.548: INFO: Pod pod-526fab5f-0ed4-4b51-8699-bbf22622ef35 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:47:08.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3742" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2218,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:47:08.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-m9rr STEP: Creating a pod to test 
atomic-volume-subpath Apr 22 21:47:08.895: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-m9rr" in namespace "subpath-8250" to be "success or failure" Apr 22 21:47:08.907: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Pending", Reason="", readiness=false. Elapsed: 11.98373ms Apr 22 21:47:10.911: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015793593s Apr 22 21:47:12.918: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Running", Reason="", readiness=true. Elapsed: 4.02262051s Apr 22 21:47:14.922: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Running", Reason="", readiness=true. Elapsed: 6.026677524s Apr 22 21:47:16.929: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Running", Reason="", readiness=true. Elapsed: 8.033854745s Apr 22 21:47:18.933: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Running", Reason="", readiness=true. Elapsed: 10.037674424s Apr 22 21:47:20.948: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Running", Reason="", readiness=true. Elapsed: 12.052752232s Apr 22 21:47:22.952: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Running", Reason="", readiness=true. Elapsed: 14.056800784s Apr 22 21:47:24.957: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Running", Reason="", readiness=true. Elapsed: 16.06139863s Apr 22 21:47:26.960: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Running", Reason="", readiness=true. Elapsed: 18.06452707s Apr 22 21:47:28.996: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Running", Reason="", readiness=true. Elapsed: 20.10063563s Apr 22 21:47:30.999: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Running", Reason="", readiness=true. Elapsed: 22.104023499s Apr 22 21:47:33.004: INFO: Pod "pod-subpath-test-downwardapi-m9rr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.108483208s STEP: Saw pod success Apr 22 21:47:33.004: INFO: Pod "pod-subpath-test-downwardapi-m9rr" satisfied condition "success or failure" Apr 22 21:47:33.007: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-m9rr container test-container-subpath-downwardapi-m9rr: STEP: delete the pod Apr 22 21:47:33.054: INFO: Waiting for pod pod-subpath-test-downwardapi-m9rr to disappear Apr 22 21:47:33.064: INFO: Pod pod-subpath-test-downwardapi-m9rr no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-m9rr Apr 22 21:47:33.064: INFO: Deleting pod "pod-subpath-test-downwardapi-m9rr" in namespace "subpath-8250" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:47:33.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8250" for this suite. • [SLOW TEST:24.392 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":140,"skipped":2222,"failed":0} [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:47:33.109: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:47:33.193: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:47:37.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9845" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2222,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:47:37.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3649 STEP: creating a selector STEP: Creating 
the service pods in kubernetes Apr 22 21:47:37.330: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 22 21:47:57.473: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.86 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3649 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:47:57.473: INFO: >>> kubeConfig: /root/.kube/config I0422 21:47:57.508193 6 log.go:172] (0xc0017da580) (0xc0027c1ae0) Create stream I0422 21:47:57.508225 6 log.go:172] (0xc0017da580) (0xc0027c1ae0) Stream added, broadcasting: 1 I0422 21:47:57.510090 6 log.go:172] (0xc0017da580) Reply frame received for 1 I0422 21:47:57.510128 6 log.go:172] (0xc0017da580) (0xc002836000) Create stream I0422 21:47:57.510141 6 log.go:172] (0xc0017da580) (0xc002836000) Stream added, broadcasting: 3 I0422 21:47:57.511127 6 log.go:172] (0xc0017da580) Reply frame received for 3 I0422 21:47:57.511149 6 log.go:172] (0xc0017da580) (0xc001d265a0) Create stream I0422 21:47:57.511156 6 log.go:172] (0xc0017da580) (0xc001d265a0) Stream added, broadcasting: 5 I0422 21:47:57.512102 6 log.go:172] (0xc0017da580) Reply frame received for 5 I0422 21:47:58.578852 6 log.go:172] (0xc0017da580) Data frame received for 3 I0422 21:47:58.578899 6 log.go:172] (0xc002836000) (3) Data frame handling I0422 21:47:58.578936 6 log.go:172] (0xc002836000) (3) Data frame sent I0422 21:47:58.578958 6 log.go:172] (0xc0017da580) Data frame received for 3 I0422 21:47:58.578977 6 log.go:172] (0xc002836000) (3) Data frame handling I0422 21:47:58.579142 6 log.go:172] (0xc0017da580) Data frame received for 5 I0422 21:47:58.579163 6 log.go:172] (0xc001d265a0) (5) Data frame handling I0422 21:47:58.581287 6 log.go:172] (0xc0017da580) Data frame received for 1 I0422 21:47:58.581331 6 log.go:172] (0xc0027c1ae0) (1) Data frame handling I0422 21:47:58.581365 6 log.go:172] (0xc0027c1ae0) (1) Data frame 
sent I0422 21:47:58.581399 6 log.go:172] (0xc0017da580) (0xc0027c1ae0) Stream removed, broadcasting: 1 I0422 21:47:58.581543 6 log.go:172] (0xc0017da580) (0xc0027c1ae0) Stream removed, broadcasting: 1 I0422 21:47:58.581579 6 log.go:172] (0xc0017da580) (0xc002836000) Stream removed, broadcasting: 3 I0422 21:47:58.581600 6 log.go:172] (0xc0017da580) (0xc001d265a0) Stream removed, broadcasting: 5 Apr 22 21:47:58.581: INFO: Found all expected endpoints: [netserver-0] I0422 21:47:58.581694 6 log.go:172] (0xc0017da580) Go away received Apr 22 21:47:58.585: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.198 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3649 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 21:47:58.585: INFO: >>> kubeConfig: /root/.kube/config I0422 21:47:58.620358 6 log.go:172] (0xc0017dac60) (0xc0027c1ea0) Create stream I0422 21:47:58.620393 6 log.go:172] (0xc0017dac60) (0xc0027c1ea0) Stream added, broadcasting: 1 I0422 21:47:58.622500 6 log.go:172] (0xc0017dac60) Reply frame received for 1 I0422 21:47:58.622541 6 log.go:172] (0xc0017dac60) (0xc001d26640) Create stream I0422 21:47:58.622556 6 log.go:172] (0xc0017dac60) (0xc001d26640) Stream added, broadcasting: 3 I0422 21:47:58.623621 6 log.go:172] (0xc0017dac60) Reply frame received for 3 I0422 21:47:58.623654 6 log.go:172] (0xc0017dac60) (0xc0019dc640) Create stream I0422 21:47:58.623667 6 log.go:172] (0xc0017dac60) (0xc0019dc640) Stream added, broadcasting: 5 I0422 21:47:58.624697 6 log.go:172] (0xc0017dac60) Reply frame received for 5 I0422 21:47:59.723358 6 log.go:172] (0xc0017dac60) Data frame received for 3 I0422 21:47:59.723404 6 log.go:172] (0xc001d26640) (3) Data frame handling I0422 21:47:59.723456 6 log.go:172] (0xc001d26640) (3) Data frame sent I0422 21:47:59.723489 6 log.go:172] (0xc0017dac60) Data frame received for 5 I0422 21:47:59.723501 6 log.go:172] (0xc0019dc640) (5) 
Data frame handling I0422 21:47:59.723629 6 log.go:172] (0xc0017dac60) Data frame received for 3 I0422 21:47:59.723640 6 log.go:172] (0xc001d26640) (3) Data frame handling I0422 21:47:59.725768 6 log.go:172] (0xc0017dac60) Data frame received for 1 I0422 21:47:59.725829 6 log.go:172] (0xc0027c1ea0) (1) Data frame handling I0422 21:47:59.725882 6 log.go:172] (0xc0027c1ea0) (1) Data frame sent I0422 21:47:59.725903 6 log.go:172] (0xc0017dac60) (0xc0027c1ea0) Stream removed, broadcasting: 1 I0422 21:47:59.725924 6 log.go:172] (0xc0017dac60) Go away received I0422 21:47:59.726282 6 log.go:172] (0xc0017dac60) (0xc0027c1ea0) Stream removed, broadcasting: 1 I0422 21:47:59.726319 6 log.go:172] (0xc0017dac60) (0xc001d26640) Stream removed, broadcasting: 3 I0422 21:47:59.726337 6 log.go:172] (0xc0017dac60) (0xc0019dc640) Stream removed, broadcasting: 5 Apr 22 21:47:59.726: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:47:59.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3649" for this suite. 
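The node-pod UDP check above runs `echo hostName | nc -w 1 -u <podIP> 8081 | grep -v '^\s*$'` inside the host-test-container pod and expects the netserver pod's name back. A minimal local analogue of that request/reply probe, with a tiny UDP responder standing in for the netserver pod (loopback addresses here are stand-ins, not the pod IPs from the log):

```python
import socket
import threading

# A stub responder playing the netserver pod's role: reply to "hostName"
# with its own name, as the e2e probe expects.
def udp_responder(sock, hostname=b"netserver-0"):
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(hostname, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # OS-assigned port, local stand-in
port = server.getsockname()[1]
t = threading.Thread(target=udp_responder, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)                  # analogous to nc -w 1
client.sendto(b"hostName", ("127.0.0.1", port))
reply, _ = client.recvfrom(1024)
print(reply.decode())  # netserver-0
t.join()
server.close()
client.close()
```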
• [SLOW TEST:22.462 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:47:59.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Apr 22 21:47:59.827: INFO: Waiting up to 5m0s for pod "var-expansion-3d6d72c7-f55e-4278-a4da-efbd340bb99e" in namespace "var-expansion-6825" to be "success or failure" Apr 22 21:47:59.870: INFO: Pod "var-expansion-3d6d72c7-f55e-4278-a4da-efbd340bb99e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 43.161725ms Apr 22 21:48:01.875: INFO: Pod "var-expansion-3d6d72c7-f55e-4278-a4da-efbd340bb99e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047332936s Apr 22 21:48:03.878: INFO: Pod "var-expansion-3d6d72c7-f55e-4278-a4da-efbd340bb99e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051190964s STEP: Saw pod success Apr 22 21:48:03.878: INFO: Pod "var-expansion-3d6d72c7-f55e-4278-a4da-efbd340bb99e" satisfied condition "success or failure" Apr 22 21:48:03.881: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-3d6d72c7-f55e-4278-a4da-efbd340bb99e container dapi-container: STEP: delete the pod Apr 22 21:48:03.920: INFO: Waiting for pod var-expansion-3d6d72c7-f55e-4278-a4da-efbd340bb99e to disappear Apr 22 21:48:03.927: INFO: Pod var-expansion-3d6d72c7-f55e-4278-a4da-efbd340bb99e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:48:03.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6825" for this suite. 
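The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Elapsed: ...` lines throughout this log come from a poll loop over the pod's phase. A sketch of that pattern, where `get_pod_phase` is a hypothetical callable standing in for the Kubernetes API lookup the framework performs:

```python
import time

def wait_for_pod_condition(get_pod_phase, timeout=300.0, interval=2.0):
    """Poll a pod's phase until it reaches a terminal state, mirroring
    the framework's "success or failure" wait (default 5m0s timeout)."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_pod_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError(f"pod did not terminate within {timeout}s")

# Example with a canned phase sequence instead of a live cluster:
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), interval=0.01)
print(result)  # Succeeded
```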
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2258,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:48:03.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 22 21:48:04.023: INFO: Waiting up to 5m0s for pod "downward-api-088b19b3-acf6-4038-834a-5d6f74661f25" in namespace "downward-api-6493" to be "success or failure" Apr 22 21:48:04.053: INFO: Pod "downward-api-088b19b3-acf6-4038-834a-5d6f74661f25": Phase="Pending", Reason="", readiness=false. Elapsed: 29.778029ms Apr 22 21:48:06.057: INFO: Pod "downward-api-088b19b3-acf6-4038-834a-5d6f74661f25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033771113s Apr 22 21:48:08.061: INFO: Pod "downward-api-088b19b3-acf6-4038-834a-5d6f74661f25": Phase="Running", Reason="", readiness=true. Elapsed: 4.03777941s Apr 22 21:48:10.066: INFO: Pod "downward-api-088b19b3-acf6-4038-834a-5d6f74661f25": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.0424354s STEP: Saw pod success Apr 22 21:48:10.066: INFO: Pod "downward-api-088b19b3-acf6-4038-834a-5d6f74661f25" satisfied condition "success or failure" Apr 22 21:48:10.069: INFO: Trying to get logs from node jerma-worker pod downward-api-088b19b3-acf6-4038-834a-5d6f74661f25 container dapi-container: STEP: delete the pod Apr 22 21:48:10.096: INFO: Waiting for pod downward-api-088b19b3-acf6-4038-834a-5d6f74661f25 to disappear Apr 22 21:48:10.100: INFO: Pod downward-api-088b19b3-acf6-4038-834a-5d6f74661f25 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:48:10.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6493" for this suite. • [SLOW TEST:6.175 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2270,"failed":0} [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:48:10.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root 
with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-05d8d88a-264a-47dd-af74-a261934d97a7 STEP: Creating a pod to test consume secrets Apr 22 21:48:10.215: INFO: Waiting up to 5m0s for pod "pod-secrets-be264dec-7bbc-473d-8a4f-5737646235f0" in namespace "secrets-3125" to be "success or failure" Apr 22 21:48:10.232: INFO: Pod "pod-secrets-be264dec-7bbc-473d-8a4f-5737646235f0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.984538ms Apr 22 21:48:12.237: INFO: Pod "pod-secrets-be264dec-7bbc-473d-8a4f-5737646235f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021374946s Apr 22 21:48:14.241: INFO: Pod "pod-secrets-be264dec-7bbc-473d-8a4f-5737646235f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02565253s STEP: Saw pod success Apr 22 21:48:14.241: INFO: Pod "pod-secrets-be264dec-7bbc-473d-8a4f-5737646235f0" satisfied condition "success or failure" Apr 22 21:48:14.244: INFO: Trying to get logs from node jerma-worker pod pod-secrets-be264dec-7bbc-473d-8a4f-5737646235f0 container secret-volume-test: STEP: delete the pod Apr 22 21:48:14.263: INFO: Waiting for pod pod-secrets-be264dec-7bbc-473d-8a4f-5737646235f0 to disappear Apr 22 21:48:14.315: INFO: Pod pod-secrets-be264dec-7bbc-473d-8a4f-5737646235f0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:48:14.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3125" for this suite. 
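The secrets test above mounts a Secret volume with an explicit `defaultMode` into a non-root pod whose `securityContext` sets `fsGroup`. A sketch of the pod-spec fields being exercised, as a plain dict; the secret name is taken from the log, but the uid/gid/mode values and image are illustrative assumptions the log does not record:

```python
# Sketch of the pod-spec shape for "consumable as non-root with
# defaultMode and fsGroup set". Numeric values and image are assumed.
pod_spec = {
    "securityContext": {
        "runAsUser": 1000,   # non-root, per the test name
        "fsGroup": 2000,     # group ownership applied to volume files
    },
    "containers": [{
        "name": "secret-volume-test",
        "image": "busybox",  # placeholder image
        "volumeMounts": [{"name": "secret-volume",
                          "mountPath": "/etc/secret-volume"}],
    }],
    "volumes": [{
        "name": "secret-volume",
        "secret": {
            "secretName": "secret-test-05d8d88a-264a-47dd-af74-a261934d97a7",
            "defaultMode": 0o400,  # assumed file mode for projected files
        },
    }],
}
print(oct(pod_spec["volumes"][0]["secret"]["defaultMode"]))  # 0o400
```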
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2270,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:48:14.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-67b15b26-c075-44c5-841f-fc46855b8196 STEP: Creating a pod to test consume configMaps Apr 22 21:48:14.396: INFO: Waiting up to 5m0s for pod "pod-configmaps-37be1ce5-4fe4-4d10-b84d-7904aab6fd1d" in namespace "configmap-8761" to be "success or failure" Apr 22 21:48:14.400: INFO: Pod "pod-configmaps-37be1ce5-4fe4-4d10-b84d-7904aab6fd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006705ms Apr 22 21:48:16.446: INFO: Pod "pod-configmaps-37be1ce5-4fe4-4d10-b84d-7904aab6fd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050437811s Apr 22 21:48:18.464: INFO: Pod "pod-configmaps-37be1ce5-4fe4-4d10-b84d-7904aab6fd1d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067858524s STEP: Saw pod success Apr 22 21:48:18.464: INFO: Pod "pod-configmaps-37be1ce5-4fe4-4d10-b84d-7904aab6fd1d" satisfied condition "success or failure" Apr 22 21:48:18.466: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-37be1ce5-4fe4-4d10-b84d-7904aab6fd1d container configmap-volume-test: STEP: delete the pod Apr 22 21:48:18.498: INFO: Waiting for pod pod-configmaps-37be1ce5-4fe4-4d10-b84d-7904aab6fd1d to disappear Apr 22 21:48:18.514: INFO: Pod pod-configmaps-37be1ce5-4fe4-4d10-b84d-7904aab6fd1d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:48:18.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8761" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2283,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:48:18.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod 
pod-subpath-test-projected-68h2 STEP: Creating a pod to test atomic-volume-subpath Apr 22 21:48:18.606: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-68h2" in namespace "subpath-7565" to be "success or failure" Apr 22 21:48:18.610: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.479389ms Apr 22 21:48:22.044: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.438217355s Apr 22 21:48:24.049: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Running", Reason="", readiness=true. Elapsed: 5.442451285s Apr 22 21:48:26.053: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Running", Reason="", readiness=true. Elapsed: 7.446885779s Apr 22 21:48:28.058: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Running", Reason="", readiness=true. Elapsed: 9.451395512s Apr 22 21:48:30.062: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Running", Reason="", readiness=true. Elapsed: 11.455517887s Apr 22 21:48:32.066: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Running", Reason="", readiness=true. Elapsed: 13.459388892s Apr 22 21:48:34.070: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Running", Reason="", readiness=true. Elapsed: 15.463821314s Apr 22 21:48:36.074: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Running", Reason="", readiness=true. Elapsed: 17.468004316s Apr 22 21:48:38.079: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Running", Reason="", readiness=true. Elapsed: 19.47232006s Apr 22 21:48:40.083: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Running", Reason="", readiness=true. Elapsed: 21.476875948s Apr 22 21:48:42.088: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Running", Reason="", readiness=true. Elapsed: 23.481283911s Apr 22 21:48:44.092: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Running", Reason="", readiness=true. 
Elapsed: 25.485478781s Apr 22 21:48:46.096: INFO: Pod "pod-subpath-test-projected-68h2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.489640374s STEP: Saw pod success Apr 22 21:48:46.096: INFO: Pod "pod-subpath-test-projected-68h2" satisfied condition "success or failure" Apr 22 21:48:46.100: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-68h2 container test-container-subpath-projected-68h2: STEP: delete the pod Apr 22 21:48:46.120: INFO: Waiting for pod pod-subpath-test-projected-68h2 to disappear Apr 22 21:48:46.125: INFO: Pod pod-subpath-test-projected-68h2 no longer exists STEP: Deleting pod pod-subpath-test-projected-68h2 Apr 22 21:48:46.125: INFO: Deleting pod "pod-subpath-test-projected-68h2" in namespace "subpath-7565" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:48:46.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7565" for this suite. 
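The atomic-writer subpath tests (configmap, downward API, projected) mount a single file out of a volume via `subPath` and verify it survives the kubelet's atomic-writer symlink swaps. A sketch of the container/volume shape involved; the container name matches the log, but the paths and projected source are illustrative assumptions:

```python
# Shape of the subPath mount the test exercises: the container sees one
# entry of the projected volume, selected via subPath. Paths assumed.
container = {
    "name": "test-container-subpath-projected-68h2",
    "volumeMounts": [{
        "name": "test-volume",
        "mountPath": "/test-volume/test-file",  # assumed mount path
        "subPath": "test-file",                 # single key from the volume
    }],
}
volume = {
    "name": "test-volume",
    "projected": {
        # Hypothetical source; the log does not name the projected sources.
        "sources": [{"configMap": {"name": "my-configmap"}}],
    },
}
```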
• [SLOW TEST:27.613 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":147,"skipped":2289,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:48:46.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 22 21:48:46.284: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 
21:49:01.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9694" for this suite. • [SLOW TEST:14.996 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":148,"skipped":2314,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:49:01.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:49:08.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3866" for this suite. • [SLOW TEST:7.046 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":149,"skipped":2324,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:49:08.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-101 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-101 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-101 Apr 22 21:49:08.282: INFO: Found 0 stateful pods, waiting for 1 Apr 22 21:49:18.286: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 22 21:49:18.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-101 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 21:49:18.522: INFO: stderr: "I0422 21:49:18.419939 1998 log.go:172] (0xc0009c60b0) (0xc0002175e0) Create stream\nI0422 
21:49:18.420013 1998 log.go:172] (0xc0009c60b0) (0xc0002175e0) Stream added, broadcasting: 1\nI0422 21:49:18.422420 1998 log.go:172] (0xc0009c60b0) Reply frame received for 1\nI0422 21:49:18.422471 1998 log.go:172] (0xc0009c60b0) (0xc000a4e000) Create stream\nI0422 21:49:18.422487 1998 log.go:172] (0xc0009c60b0) (0xc000a4e000) Stream added, broadcasting: 3\nI0422 21:49:18.423441 1998 log.go:172] (0xc0009c60b0) Reply frame received for 3\nI0422 21:49:18.423482 1998 log.go:172] (0xc0009c60b0) (0xc000a4e0a0) Create stream\nI0422 21:49:18.423498 1998 log.go:172] (0xc0009c60b0) (0xc000a4e0a0) Stream added, broadcasting: 5\nI0422 21:49:18.424448 1998 log.go:172] (0xc0009c60b0) Reply frame received for 5\nI0422 21:49:18.485425 1998 log.go:172] (0xc0009c60b0) Data frame received for 5\nI0422 21:49:18.485451 1998 log.go:172] (0xc000a4e0a0) (5) Data frame handling\nI0422 21:49:18.485472 1998 log.go:172] (0xc000a4e0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0422 21:49:18.514477 1998 log.go:172] (0xc0009c60b0) Data frame received for 3\nI0422 21:49:18.514498 1998 log.go:172] (0xc000a4e000) (3) Data frame handling\nI0422 21:49:18.514524 1998 log.go:172] (0xc000a4e000) (3) Data frame sent\nI0422 21:49:18.514761 1998 log.go:172] (0xc0009c60b0) Data frame received for 3\nI0422 21:49:18.514781 1998 log.go:172] (0xc000a4e000) (3) Data frame handling\nI0422 21:49:18.515034 1998 log.go:172] (0xc0009c60b0) Data frame received for 5\nI0422 21:49:18.515044 1998 log.go:172] (0xc000a4e0a0) (5) Data frame handling\nI0422 21:49:18.516603 1998 log.go:172] (0xc0009c60b0) Data frame received for 1\nI0422 21:49:18.516623 1998 log.go:172] (0xc0002175e0) (1) Data frame handling\nI0422 21:49:18.516638 1998 log.go:172] (0xc0002175e0) (1) Data frame sent\nI0422 21:49:18.516655 1998 log.go:172] (0xc0009c60b0) (0xc0002175e0) Stream removed, broadcasting: 1\nI0422 21:49:18.516785 1998 log.go:172] (0xc0009c60b0) Go away received\nI0422 21:49:18.516983 1998 log.go:172] 
(0xc0009c60b0) (0xc0002175e0) Stream removed, broadcasting: 1\nI0422 21:49:18.516997 1998 log.go:172] (0xc0009c60b0) (0xc000a4e000) Stream removed, broadcasting: 3\nI0422 21:49:18.517003 1998 log.go:172] (0xc0009c60b0) (0xc000a4e0a0) Stream removed, broadcasting: 5\n" Apr 22 21:49:18.523: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 21:49:18.523: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 21:49:18.526: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 22 21:49:28.530: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 22 21:49:28.531: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 21:49:28.545: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 21:49:28.545: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC }] Apr 22 21:49:28.545: INFO: Apr 22 21:49:28.545: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 22 21:49:29.550: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994411184s Apr 22 21:49:30.554: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989687642s Apr 22 21:49:31.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986055695s Apr 22 21:49:32.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.942476534s Apr 22 21:49:33.607: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.937328409s Apr 22 
21:49:34.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.932964347s Apr 22 21:49:35.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.92835498s Apr 22 21:49:36.621: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.923808142s Apr 22 21:49:37.626: INFO: Verifying statefulset ss doesn't scale past 3 for another 918.815911ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-101 Apr 22 21:49:38.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-101 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 21:49:38.857: INFO: stderr: "I0422 21:49:38.761792 2018 log.go:172] (0xc000a66000) (0xc000a380a0) Create stream\nI0422 21:49:38.761879 2018 log.go:172] (0xc000a66000) (0xc000a380a0) Stream added, broadcasting: 1\nI0422 21:49:38.764818 2018 log.go:172] (0xc000a66000) Reply frame received for 1\nI0422 21:49:38.764843 2018 log.go:172] (0xc000a66000) (0xc000a38140) Create stream\nI0422 21:49:38.764850 2018 log.go:172] (0xc000a66000) (0xc000a38140) Stream added, broadcasting: 3\nI0422 21:49:38.766007 2018 log.go:172] (0xc000a66000) Reply frame received for 3\nI0422 21:49:38.766042 2018 log.go:172] (0xc000a66000) (0xc000a381e0) Create stream\nI0422 21:49:38.766053 2018 log.go:172] (0xc000a66000) (0xc000a381e0) Stream added, broadcasting: 5\nI0422 21:49:38.767133 2018 log.go:172] (0xc000a66000) Reply frame received for 5\nI0422 21:49:38.847046 2018 log.go:172] (0xc000a66000) Data frame received for 3\nI0422 21:49:38.847080 2018 log.go:172] (0xc000a38140) (3) Data frame handling\nI0422 21:49:38.847088 2018 log.go:172] (0xc000a38140) (3) Data frame sent\nI0422 21:49:38.847112 2018 log.go:172] (0xc000a66000) Data frame received for 5\nI0422 21:49:38.847124 2018 log.go:172] (0xc000a381e0) (5) Data frame handling\nI0422 21:49:38.847135 2018 log.go:172] (0xc000a381e0) (5) 
Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0422 21:49:38.847265 2018 log.go:172] (0xc000a66000) Data frame received for 3\nI0422 21:49:38.847285 2018 log.go:172] (0xc000a38140) (3) Data frame handling\nI0422 21:49:38.847511 2018 log.go:172] (0xc000a66000) Data frame received for 5\nI0422 21:49:38.847541 2018 log.go:172] (0xc000a381e0) (5) Data frame handling\nI0422 21:49:38.849625 2018 log.go:172] (0xc000a66000) Data frame received for 1\nI0422 21:49:38.849650 2018 log.go:172] (0xc000a380a0) (1) Data frame handling\nI0422 21:49:38.849663 2018 log.go:172] (0xc000a380a0) (1) Data frame sent\nI0422 21:49:38.849679 2018 log.go:172] (0xc000a66000) (0xc000a380a0) Stream removed, broadcasting: 1\nI0422 21:49:38.849697 2018 log.go:172] (0xc000a66000) Go away received\nI0422 21:49:38.850106 2018 log.go:172] (0xc000a66000) (0xc000a380a0) Stream removed, broadcasting: 1\nI0422 21:49:38.850130 2018 log.go:172] (0xc000a66000) (0xc000a38140) Stream removed, broadcasting: 3\nI0422 21:49:38.850140 2018 log.go:172] (0xc000a66000) (0xc000a381e0) Stream removed, broadcasting: 5\n" Apr 22 21:49:38.857: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 21:49:38.857: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 21:49:38.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 21:49:39.062: INFO: stderr: "I0422 21:49:38.981593 2038 log.go:172] (0xc000a94dc0) (0xc0009cc3c0) Create stream\nI0422 21:49:38.981671 2038 log.go:172] (0xc000a94dc0) (0xc0009cc3c0) Stream added, broadcasting: 1\nI0422 21:49:38.984747 2038 log.go:172] (0xc000a94dc0) Reply frame received for 1\nI0422 21:49:38.984790 2038 log.go:172] (0xc000a94dc0) (0xc0008f83c0) Create stream\nI0422 21:49:38.984806 2038 log.go:172] 
(0xc000a94dc0) (0xc0008f83c0) Stream added, broadcasting: 3\nI0422 21:49:38.985874 2038 log.go:172] (0xc000a94dc0) Reply frame received for 3\nI0422 21:49:38.985907 2038 log.go:172] (0xc000a94dc0) (0xc0008f8460) Create stream\nI0422 21:49:38.985917 2038 log.go:172] (0xc000a94dc0) (0xc0008f8460) Stream added, broadcasting: 5\nI0422 21:49:38.986940 2038 log.go:172] (0xc000a94dc0) Reply frame received for 5\nI0422 21:49:39.054586 2038 log.go:172] (0xc000a94dc0) Data frame received for 5\nI0422 21:49:39.054645 2038 log.go:172] (0xc0008f8460) (5) Data frame handling\nI0422 21:49:39.054664 2038 log.go:172] (0xc0008f8460) (5) Data frame sent\nI0422 21:49:39.054680 2038 log.go:172] (0xc000a94dc0) Data frame received for 5\nI0422 21:49:39.054691 2038 log.go:172] (0xc0008f8460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0422 21:49:39.055176 2038 log.go:172] (0xc000a94dc0) Data frame received for 3\nI0422 21:49:39.055208 2038 log.go:172] (0xc0008f83c0) (3) Data frame handling\nI0422 21:49:39.055264 2038 log.go:172] (0xc0008f83c0) (3) Data frame sent\nI0422 21:49:39.055277 2038 log.go:172] (0xc000a94dc0) Data frame received for 3\nI0422 21:49:39.055288 2038 log.go:172] (0xc0008f83c0) (3) Data frame handling\nI0422 21:49:39.057803 2038 log.go:172] (0xc000a94dc0) Data frame received for 1\nI0422 21:49:39.057821 2038 log.go:172] (0xc0009cc3c0) (1) Data frame handling\nI0422 21:49:39.057840 2038 log.go:172] (0xc0009cc3c0) (1) Data frame sent\nI0422 21:49:39.057855 2038 log.go:172] (0xc000a94dc0) (0xc0009cc3c0) Stream removed, broadcasting: 1\nI0422 21:49:39.057930 2038 log.go:172] (0xc000a94dc0) Go away received\nI0422 21:49:39.058111 2038 log.go:172] (0xc000a94dc0) (0xc0009cc3c0) Stream removed, broadcasting: 1\nI0422 21:49:39.058128 2038 log.go:172] (0xc000a94dc0) (0xc0008f83c0) Stream removed, broadcasting: 3\nI0422 21:49:39.058135 2038 log.go:172] (0xc000a94dc0) 
(0xc0008f8460) Stream removed, broadcasting: 5\n" Apr 22 21:49:39.062: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 21:49:39.062: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 21:49:39.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-101 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 21:49:39.235: INFO: stderr: "I0422 21:49:39.171648 2058 log.go:172] (0xc0008ce6e0) (0xc000617a40) Create stream\nI0422 21:49:39.171703 2058 log.go:172] (0xc0008ce6e0) (0xc000617a40) Stream added, broadcasting: 1\nI0422 21:49:39.173769 2058 log.go:172] (0xc0008ce6e0) Reply frame received for 1\nI0422 21:49:39.173802 2058 log.go:172] (0xc0008ce6e0) (0xc00097c000) Create stream\nI0422 21:49:39.173812 2058 log.go:172] (0xc0008ce6e0) (0xc00097c000) Stream added, broadcasting: 3\nI0422 21:49:39.174411 2058 log.go:172] (0xc0008ce6e0) Reply frame received for 3\nI0422 21:49:39.174452 2058 log.go:172] (0xc0008ce6e0) (0xc000b9e000) Create stream\nI0422 21:49:39.174464 2058 log.go:172] (0xc0008ce6e0) (0xc000b9e000) Stream added, broadcasting: 5\nI0422 21:49:39.175199 2058 log.go:172] (0xc0008ce6e0) Reply frame received for 5\nI0422 21:49:39.230269 2058 log.go:172] (0xc0008ce6e0) Data frame received for 3\nI0422 21:49:39.230371 2058 log.go:172] (0xc00097c000) (3) Data frame handling\nI0422 21:49:39.230401 2058 log.go:172] (0xc00097c000) (3) Data frame sent\nI0422 21:49:39.230410 2058 log.go:172] (0xc0008ce6e0) Data frame received for 3\nI0422 21:49:39.230415 2058 log.go:172] (0xc00097c000) (3) Data frame handling\nI0422 21:49:39.230448 2058 log.go:172] (0xc0008ce6e0) Data frame received for 5\nI0422 21:49:39.230470 2058 log.go:172] (0xc000b9e000) (5) Data frame handling\nI0422 21:49:39.230483 2058 log.go:172] (0xc000b9e000) (5) Data frame sent\nI0422 
21:49:39.230497 2058 log.go:172] (0xc0008ce6e0) Data frame received for 5\nI0422 21:49:39.230508 2058 log.go:172] (0xc000b9e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0422 21:49:39.231826 2058 log.go:172] (0xc0008ce6e0) Data frame received for 1\nI0422 21:49:39.231846 2058 log.go:172] (0xc000617a40) (1) Data frame handling\nI0422 21:49:39.231858 2058 log.go:172] (0xc000617a40) (1) Data frame sent\nI0422 21:49:39.231941 2058 log.go:172] (0xc0008ce6e0) (0xc000617a40) Stream removed, broadcasting: 1\nI0422 21:49:39.232011 2058 log.go:172] (0xc0008ce6e0) Go away received\nI0422 21:49:39.232294 2058 log.go:172] (0xc0008ce6e0) (0xc000617a40) Stream removed, broadcasting: 1\nI0422 21:49:39.232313 2058 log.go:172] (0xc0008ce6e0) (0xc00097c000) Stream removed, broadcasting: 3\nI0422 21:49:39.232325 2058 log.go:172] (0xc0008ce6e0) (0xc000b9e000) Stream removed, broadcasting: 5\n" Apr 22 21:49:39.235: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 21:49:39.235: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 21:49:39.253: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 21:49:39.253: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 21:49:39.253: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 22 21:49:39.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-101 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 21:49:39.483: INFO: stderr: "I0422 21:49:39.402465 2078 log.go:172] (0xc0000f4e70) (0xc0009861e0) Create stream\nI0422 21:49:39.402523 2078 
log.go:172] (0xc0000f4e70) (0xc0009861e0) Stream added, broadcasting: 1\nI0422 21:49:39.405298 2078 log.go:172] (0xc0000f4e70) Reply frame received for 1\nI0422 21:49:39.405357 2078 log.go:172] (0xc0000f4e70) (0xc000709680) Create stream\nI0422 21:49:39.405375 2078 log.go:172] (0xc0000f4e70) (0xc000709680) Stream added, broadcasting: 3\nI0422 21:49:39.406485 2078 log.go:172] (0xc0000f4e70) Reply frame received for 3\nI0422 21:49:39.406514 2078 log.go:172] (0xc0000f4e70) (0xc000986280) Create stream\nI0422 21:49:39.406523 2078 log.go:172] (0xc0000f4e70) (0xc000986280) Stream added, broadcasting: 5\nI0422 21:49:39.407809 2078 log.go:172] (0xc0000f4e70) Reply frame received for 5\nI0422 21:49:39.476125 2078 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0422 21:49:39.476161 2078 log.go:172] (0xc000709680) (3) Data frame handling\nI0422 21:49:39.476179 2078 log.go:172] (0xc000709680) (3) Data frame sent\nI0422 21:49:39.476195 2078 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0422 21:49:39.476212 2078 log.go:172] (0xc000709680) (3) Data frame handling\nI0422 21:49:39.476235 2078 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0422 21:49:39.476245 2078 log.go:172] (0xc000986280) (5) Data frame handling\nI0422 21:49:39.476259 2078 log.go:172] (0xc000986280) (5) Data frame sent\nI0422 21:49:39.476274 2078 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0422 21:49:39.476286 2078 log.go:172] (0xc000986280) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0422 21:49:39.477987 2078 log.go:172] (0xc0000f4e70) Data frame received for 1\nI0422 21:49:39.478009 2078 log.go:172] (0xc0009861e0) (1) Data frame handling\nI0422 21:49:39.478018 2078 log.go:172] (0xc0009861e0) (1) Data frame sent\nI0422 21:49:39.478068 2078 log.go:172] (0xc0000f4e70) (0xc0009861e0) Stream removed, broadcasting: 1\nI0422 21:49:39.478127 2078 log.go:172] (0xc0000f4e70) Go away received\nI0422 21:49:39.478478 2078 log.go:172] (0xc0000f4e70) 
(0xc0009861e0) Stream removed, broadcasting: 1\nI0422 21:49:39.478505 2078 log.go:172] (0xc0000f4e70) (0xc000709680) Stream removed, broadcasting: 3\nI0422 21:49:39.478521 2078 log.go:172] (0xc0000f4e70) (0xc000986280) Stream removed, broadcasting: 5\n" Apr 22 21:49:39.483: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 21:49:39.483: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 21:49:39.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-101 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 21:49:39.710: INFO: stderr: "I0422 21:49:39.616754 2099 log.go:172] (0xc000a42b00) (0xc0005001e0) Create stream\nI0422 21:49:39.616833 2099 log.go:172] (0xc000a42b00) (0xc0005001e0) Stream added, broadcasting: 1\nI0422 21:49:39.619661 2099 log.go:172] (0xc000a42b00) Reply frame received for 1\nI0422 21:49:39.619712 2099 log.go:172] (0xc000a42b00) (0xc000860000) Create stream\nI0422 21:49:39.619731 2099 log.go:172] (0xc000a42b00) (0xc000860000) Stream added, broadcasting: 3\nI0422 21:49:39.620813 2099 log.go:172] (0xc000a42b00) Reply frame received for 3\nI0422 21:49:39.620855 2099 log.go:172] (0xc000a42b00) (0xc0008600a0) Create stream\nI0422 21:49:39.620867 2099 log.go:172] (0xc000a42b00) (0xc0008600a0) Stream added, broadcasting: 5\nI0422 21:49:39.622110 2099 log.go:172] (0xc000a42b00) Reply frame received for 5\nI0422 21:49:39.675761 2099 log.go:172] (0xc000a42b00) Data frame received for 5\nI0422 21:49:39.675793 2099 log.go:172] (0xc0008600a0) (5) Data frame handling\nI0422 21:49:39.675829 2099 log.go:172] (0xc0008600a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0422 21:49:39.702624 2099 log.go:172] (0xc000a42b00) Data frame received for 3\nI0422 21:49:39.702671 2099 log.go:172] (0xc000860000) (3) Data frame 
handling\nI0422 21:49:39.702700 2099 log.go:172] (0xc000860000) (3) Data frame sent\nI0422 21:49:39.702717 2099 log.go:172] (0xc000a42b00) Data frame received for 3\nI0422 21:49:39.702731 2099 log.go:172] (0xc000860000) (3) Data frame handling\nI0422 21:49:39.702902 2099 log.go:172] (0xc000a42b00) Data frame received for 5\nI0422 21:49:39.702952 2099 log.go:172] (0xc0008600a0) (5) Data frame handling\nI0422 21:49:39.704461 2099 log.go:172] (0xc000a42b00) Data frame received for 1\nI0422 21:49:39.704483 2099 log.go:172] (0xc0005001e0) (1) Data frame handling\nI0422 21:49:39.704507 2099 log.go:172] (0xc0005001e0) (1) Data frame sent\nI0422 21:49:39.704532 2099 log.go:172] (0xc000a42b00) (0xc0005001e0) Stream removed, broadcasting: 1\nI0422 21:49:39.704660 2099 log.go:172] (0xc000a42b00) Go away received\nI0422 21:49:39.704912 2099 log.go:172] (0xc000a42b00) (0xc0005001e0) Stream removed, broadcasting: 1\nI0422 21:49:39.704935 2099 log.go:172] (0xc000a42b00) (0xc000860000) Stream removed, broadcasting: 3\nI0422 21:49:39.704946 2099 log.go:172] (0xc000a42b00) (0xc0008600a0) Stream removed, broadcasting: 5\n" Apr 22 21:49:39.710: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 21:49:39.710: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 21:49:39.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-101 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 21:49:39.963: INFO: stderr: "I0422 21:49:39.860943 2119 log.go:172] (0xc000104dc0) (0xc000308000) Create stream\nI0422 21:49:39.861017 2119 log.go:172] (0xc000104dc0) (0xc000308000) Stream added, broadcasting: 1\nI0422 21:49:39.862967 2119 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0422 21:49:39.862995 2119 log.go:172] (0xc000104dc0) (0xc0005e39a0) Create stream\nI0422 21:49:39.863003 
2119 log.go:172] (0xc000104dc0) (0xc0005e39a0) Stream added, broadcasting: 3\nI0422 21:49:39.863629 2119 log.go:172] (0xc000104dc0) Reply frame received for 3\nI0422 21:49:39.863656 2119 log.go:172] (0xc000104dc0) (0xc000308140) Create stream\nI0422 21:49:39.863663 2119 log.go:172] (0xc000104dc0) (0xc000308140) Stream added, broadcasting: 5\nI0422 21:49:39.864454 2119 log.go:172] (0xc000104dc0) Reply frame received for 5\nI0422 21:49:39.934096 2119 log.go:172] (0xc000104dc0) Data frame received for 5\nI0422 21:49:39.934142 2119 log.go:172] (0xc000308140) (5) Data frame handling\nI0422 21:49:39.934165 2119 log.go:172] (0xc000308140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0422 21:49:39.956831 2119 log.go:172] (0xc000104dc0) Data frame received for 3\nI0422 21:49:39.956877 2119 log.go:172] (0xc0005e39a0) (3) Data frame handling\nI0422 21:49:39.956901 2119 log.go:172] (0xc0005e39a0) (3) Data frame sent\nI0422 21:49:39.956931 2119 log.go:172] (0xc000104dc0) Data frame received for 5\nI0422 21:49:39.956950 2119 log.go:172] (0xc000308140) (5) Data frame handling\nI0422 21:49:39.957364 2119 log.go:172] (0xc000104dc0) Data frame received for 3\nI0422 21:49:39.957416 2119 log.go:172] (0xc0005e39a0) (3) Data frame handling\nI0422 21:49:39.959156 2119 log.go:172] (0xc000104dc0) Data frame received for 1\nI0422 21:49:39.959179 2119 log.go:172] (0xc000308000) (1) Data frame handling\nI0422 21:49:39.959195 2119 log.go:172] (0xc000308000) (1) Data frame sent\nI0422 21:49:39.959231 2119 log.go:172] (0xc000104dc0) (0xc000308000) Stream removed, broadcasting: 1\nI0422 21:49:39.959270 2119 log.go:172] (0xc000104dc0) Go away received\nI0422 21:49:39.959744 2119 log.go:172] (0xc000104dc0) (0xc000308000) Stream removed, broadcasting: 1\nI0422 21:49:39.959770 2119 log.go:172] (0xc000104dc0) (0xc0005e39a0) Stream removed, broadcasting: 3\nI0422 21:49:39.959782 2119 log.go:172] (0xc000104dc0) (0xc000308140) Stream removed, broadcasting: 5\n" Apr 22 
21:49:39.963: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 21:49:39.963: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 21:49:39.963: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 21:49:39.966: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 22 21:49:49.983: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 22 21:49:49.983: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 22 21:49:49.983: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 22 21:49:49.995: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 21:49:49.995: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC }] Apr 22 21:49:49.996: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:49.996: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:49.996: INFO: Apr 22 21:49:49.996: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 21:49:51.002: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 21:49:51.002: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC }] Apr 22 21:49:51.002: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:51.002: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 
UTC }] Apr 22 21:49:51.002: INFO: Apr 22 21:49:51.002: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 21:49:52.007: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 21:49:52.007: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC }] Apr 22 21:49:52.007: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:52.007: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:52.007: INFO: Apr 22 21:49:52.007: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 21:49:53.013: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 21:49:53.013: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC }] Apr 22 21:49:53.013: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:53.013: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:53.013: INFO: Apr 22 21:49:53.013: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 21:49:54.018: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 21:49:54.018: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 
21:49:10 +0000 UTC }] Apr 22 21:49:54.018: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:54.018: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:54.018: INFO: Apr 22 21:49:54.018: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 21:49:55.023: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 21:49:55.023: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC }] Apr 22 21:49:55.023: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:55.023: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:55.023: INFO: Apr 22 21:49:55.023: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 21:49:56.027: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 21:49:56.027: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC }] Apr 22 21:49:56.027: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:56.027: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 
21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:56.027: INFO: Apr 22 21:49:56.027: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 21:49:57.032: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 21:49:57.032: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC }] Apr 22 21:49:57.032: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:57.032: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:57.032: INFO: Apr 22 21:49:57.032: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 21:49:58.037: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 21:49:58.037: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC }] Apr 22 21:49:58.037: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:58.037: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:58.037: INFO: Apr 22 21:49:58.037: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 21:49:59.042: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 21:49:59.042: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-04-22 21:49:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:10 +0000 UTC }] Apr 22 21:49:59.042: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:59.042: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-22 21:49:28 +0000 UTC }] Apr 22 21:49:59.042: INFO: Apr 22 21:49:59.042: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-101 Apr 22 21:50:00.045: INFO: Scaling statefulset ss to 0 Apr 22 21:50:00.054: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 22 21:50:00.057: INFO: Deleting all statefulset in ns statefulset-101
Apr 22 21:50:00.059: INFO: Scaling statefulset ss to 0 Apr 22 21:50:00.067: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 21:50:00.069: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:50:00.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-101" for this suite. • [SLOW TEST:51.923 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":150,"skipped":2324,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:50:00.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Creating projection with secret that has name secret-emptykey-test-241eb852-57c4-43d3-b2a2-8292963deb7f [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:50:00.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1740" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":151,"skipped":2335,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:50:00.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:50:00.227: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:50:01.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2801" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":152,"skipped":2338,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:50:01.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0422 21:50:41.756092 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 22 21:50:41.756: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:50:41.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2903" for this suite. 
• [SLOW TEST:40.487 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":153,"skipped":2355,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:50:41.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-9cf93256-7ad7-455d-941d-047d789e2dba STEP: Creating a pod to test consume configMaps Apr 22 21:50:42.502: INFO: Waiting up to 5m0s for pod "pod-configmaps-40c2ed72-84d3-402f-87bc-040665bc9de0" in namespace "configmap-4819" to be "success or failure" Apr 22 21:50:42.518: INFO: Pod "pod-configmaps-40c2ed72-84d3-402f-87bc-040665bc9de0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.487837ms Apr 22 21:50:44.525: INFO: Pod "pod-configmaps-40c2ed72-84d3-402f-87bc-040665bc9de0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022891646s Apr 22 21:50:46.529: INFO: Pod "pod-configmaps-40c2ed72-84d3-402f-87bc-040665bc9de0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026558272s STEP: Saw pod success Apr 22 21:50:46.529: INFO: Pod "pod-configmaps-40c2ed72-84d3-402f-87bc-040665bc9de0" satisfied condition "success or failure" Apr 22 21:50:46.531: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-40c2ed72-84d3-402f-87bc-040665bc9de0 container configmap-volume-test: STEP: delete the pod Apr 22 21:50:46.596: INFO: Waiting for pod pod-configmaps-40c2ed72-84d3-402f-87bc-040665bc9de0 to disappear Apr 22 21:50:46.607: INFO: Pod pod-configmaps-40c2ed72-84d3-402f-87bc-040665bc9de0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:50:46.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4819" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2369,"failed":0} S ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:50:46.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not 
allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:50:46.695: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-a4e38da2-eda5-40db-8199-219b7d0b6999" in namespace "security-context-test-5598" to be "success or failure" Apr 22 21:50:46.703: INFO: Pod "alpine-nnp-false-a4e38da2-eda5-40db-8199-219b7d0b6999": Phase="Pending", Reason="", readiness=false. Elapsed: 7.802126ms Apr 22 21:50:48.730: INFO: Pod "alpine-nnp-false-a4e38da2-eda5-40db-8199-219b7d0b6999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034347619s Apr 22 21:50:50.751: INFO: Pod "alpine-nnp-false-a4e38da2-eda5-40db-8199-219b7d0b6999": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055705365s Apr 22 21:50:52.755: INFO: Pod "alpine-nnp-false-a4e38da2-eda5-40db-8199-219b7d0b6999": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059595771s Apr 22 21:50:52.755: INFO: Pod "alpine-nnp-false-a4e38da2-eda5-40db-8199-219b7d0b6999" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:50:52.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5598" for this suite. 
• [SLOW TEST:6.203 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2370,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:50:52.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 22 21:50:52.874: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 22 21:50:52.890: INFO: Waiting for terminating namespaces to be deleted... 
Apr 22 21:50:52.894: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 22 21:50:52.899: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:50:52.899: INFO: Container kindnet-cni ready: true, restart count 0 Apr 22 21:50:52.899: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:50:52.899: INFO: Container kube-proxy ready: true, restart count 0 Apr 22 21:50:52.899: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 22 21:50:52.905: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:50:52.905: INFO: Container kube-proxy ready: true, restart count 0 Apr 22 21:50:52.905: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 22 21:50:52.905: INFO: Container kube-hunter ready: false, restart count 0 Apr 22 21:50:52.905: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 21:50:52.905: INFO: Container kindnet-cni ready: true, restart count 0 Apr 22 21:50:52.905: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 22 21:50:52.905: INFO: Container kube-bench ready: false, restart count 0 Apr 22 21:50:52.905: INFO: alpine-nnp-false-a4e38da2-eda5-40db-8199-219b7d0b6999 from security-context-test-5598 started at 2020-04-22 21:50:46 +0000 UTC (1 container statuses recorded) Apr 22 21:50:52.905: INFO: Container alpine-nnp-false-a4e38da2-eda5-40db-8199-219b7d0b6999 ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to 
launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5dcc70de-e8b8-475f-a085-a739b077a98c 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-5dcc70de-e8b8-475f-a085-a739b077a98c off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5dcc70de-e8b8-475f-a085-a739b077a98c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:56:01.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1706" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.244 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":156,"skipped":2385,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:56:01.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:56:18.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9340" for this suite. • [SLOW TEST:17.147 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":157,"skipped":2387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:56:18.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 21:56:18.280: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5305a397-1c0b-4c58-b374-d1e9c8a5e918" in namespace "projected-976" to be "success or failure" Apr 22 21:56:18.288: INFO: Pod "downwardapi-volume-5305a397-1c0b-4c58-b374-d1e9c8a5e918": Phase="Pending", Reason="", readiness=false. Elapsed: 7.976265ms Apr 22 21:56:20.293: INFO: Pod "downwardapi-volume-5305a397-1c0b-4c58-b374-d1e9c8a5e918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012684723s Apr 22 21:56:22.298: INFO: Pod "downwardapi-volume-5305a397-1c0b-4c58-b374-d1e9c8a5e918": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017357538s STEP: Saw pod success Apr 22 21:56:22.298: INFO: Pod "downwardapi-volume-5305a397-1c0b-4c58-b374-d1e9c8a5e918" satisfied condition "success or failure" Apr 22 21:56:22.305: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5305a397-1c0b-4c58-b374-d1e9c8a5e918 container client-container: STEP: delete the pod Apr 22 21:56:22.347: INFO: Waiting for pod downwardapi-volume-5305a397-1c0b-4c58-b374-d1e9c8a5e918 to disappear Apr 22 21:56:22.362: INFO: Pod downwardapi-volume-5305a397-1c0b-4c58-b374-d1e9c8a5e918 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:56:22.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-976" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:56:22.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit 
if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 21:56:22.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85bdb41c-a1fa-4d5a-9db1-a14b57f0520f" in namespace "projected-1950" to be "success or failure" Apr 22 21:56:22.464: INFO: Pod "downwardapi-volume-85bdb41c-a1fa-4d5a-9db1-a14b57f0520f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.617115ms Apr 22 21:56:24.468: INFO: Pod "downwardapi-volume-85bdb41c-a1fa-4d5a-9db1-a14b57f0520f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007305971s Apr 22 21:56:26.472: INFO: Pod "downwardapi-volume-85bdb41c-a1fa-4d5a-9db1-a14b57f0520f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0116452s STEP: Saw pod success Apr 22 21:56:26.472: INFO: Pod "downwardapi-volume-85bdb41c-a1fa-4d5a-9db1-a14b57f0520f" satisfied condition "success or failure" Apr 22 21:56:26.475: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-85bdb41c-a1fa-4d5a-9db1-a14b57f0520f container client-container: STEP: delete the pod Apr 22 21:56:26.520: INFO: Waiting for pod downwardapi-volume-85bdb41c-a1fa-4d5a-9db1-a14b57f0520f to disappear Apr 22 21:56:26.536: INFO: Pod downwardapi-volume-85bdb41c-a1fa-4d5a-9db1-a14b57f0520f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:56:26.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1950" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2490,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:56:26.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6565/configmap-test-8a4522c6-26e0-45e3-88ca-4493028a1e57 STEP: Creating a pod to test consume configMaps Apr 22 21:56:26.620: INFO: Waiting up to 5m0s for pod "pod-configmaps-a10ecd09-fc99-4d5e-b2fb-ab59de5482bd" in namespace "configmap-6565" to be "success or failure" Apr 22 21:56:26.626: INFO: Pod "pod-configmaps-a10ecd09-fc99-4d5e-b2fb-ab59de5482bd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.6582ms Apr 22 21:56:28.630: INFO: Pod "pod-configmaps-a10ecd09-fc99-4d5e-b2fb-ab59de5482bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009509313s Apr 22 21:56:30.634: INFO: Pod "pod-configmaps-a10ecd09-fc99-4d5e-b2fb-ab59de5482bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013792436s STEP: Saw pod success Apr 22 21:56:30.634: INFO: Pod "pod-configmaps-a10ecd09-fc99-4d5e-b2fb-ab59de5482bd" satisfied condition "success or failure" Apr 22 21:56:30.638: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-a10ecd09-fc99-4d5e-b2fb-ab59de5482bd container env-test: STEP: delete the pod Apr 22 21:56:30.677: INFO: Waiting for pod pod-configmaps-a10ecd09-fc99-4d5e-b2fb-ab59de5482bd to disappear Apr 22 21:56:30.695: INFO: Pod pod-configmaps-a10ecd09-fc99-4d5e-b2fb-ab59de5482bd no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:56:30.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6565" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2493,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:56:30.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:56:30.803: INFO: Creating replica set "test-rolling-update-controller" 
(going to be adopted) Apr 22 21:56:30.815: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 22 21:56:35.836: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 22 21:56:35.836: INFO: Creating deployment "test-rolling-update-deployment" Apr 22 21:56:35.845: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 22 21:56:35.871: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 22 21:56:37.879: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 22 21:56:37.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189395, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189395, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189395, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189395, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:56:39.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189395, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189395, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189395, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189395, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:56:41.894: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 22 21:56:41.929: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6467 /apis/apps/v1/namespaces/deployment-6467/deployments/test-rolling-update-deployment 42dfa09c-299c-47ef-8cc8-4cf804b608b1 10231547 1 2020-04-22 21:56:35 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e63ff8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-22 21:56:35 +0000 UTC,LastTransitionTime:2020-04-22 21:56:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-04-22 21:56:39 +0000 UTC,LastTransitionTime:2020-04-22 21:56:35 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 22 21:56:41.931: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-6467 /apis/apps/v1/namespaces/deployment-6467/replicasets/test-rolling-update-deployment-67cf4f6444 32873e84-fbaa-4b33-a5ee-27dd0644d871 10231536 1 2020-04-22 21:56:35 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 42dfa09c-299c-47ef-8cc8-4cf804b608b1 0xc002dec497 0xc002dec498}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash:
67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002dec508 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 22 21:56:41.931: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 22 21:56:41.931: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6467 /apis/apps/v1/namespaces/deployment-6467/replicasets/test-rolling-update-controller c2a37edd-605a-4ce4-a6e6-c63a877a4a84 10231545 2 2020-04-22 21:56:30 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 42dfa09c-299c-47ef-8cc8-4cf804b608b1 0xc002dec3c7 0xc002dec3c8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil 
nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002dec428 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 21:56:41.933: INFO: Pod "test-rolling-update-deployment-67cf4f6444-dkmwp" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-dkmwp test-rolling-update-deployment-67cf4f6444- deployment-6467 /api/v1/namespaces/deployment-6467/pods/test-rolling-update-deployment-67cf4f6444-dkmwp 20b05f15-2162-40af-ad54-8132ab75b304 10231535 0 2020-04-22 21:56:35 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 32873e84-fbaa-4b33-a5ee-27dd0644d871 0xc002f2d437 0xc002f2d438}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v9wqs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v9wqs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v9wqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostn
ame:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:56:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:56:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:56:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 21:56:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.100,StartTime:2020-04-22 21:56:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 21:56:38 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://bc04646355113348fb7183db04058029d0b3e5dd585516b66895e11d4b01aa3b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:56:41.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6467" for this suite. • [SLOW TEST:11.186 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":161,"skipped":2511,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:56:41.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a 
pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-b842r in namespace proxy-2300 I0422 21:56:42.054015 6 runners.go:189] Created replication controller with name: proxy-service-b842r, namespace: proxy-2300, replica count: 1 I0422 21:56:43.104562 6 runners.go:189] proxy-service-b842r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:56:44.104752 6 runners.go:189] proxy-service-b842r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:56:45.104958 6 runners.go:189] proxy-service-b842r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0422 21:56:46.105346 6 runners.go:189] proxy-service-b842r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0422 21:56:47.105542 6 runners.go:189] proxy-service-b842r Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 21:56:47.514: INFO: setup took 5.491484342s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 22 21:56:48.263: INFO: (0) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:1080/proxy/: test<... (200; 748.751707ms) Apr 22 21:56:48.264: INFO: (0) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 749.644478ms) Apr 22 21:56:48.264: INFO: (0) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... 
(200; 749.934059ms) Apr 22 21:56:48.264: INFO: (0) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 749.761553ms) Apr 22 21:56:48.264: INFO: (0) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 749.846069ms) Apr 22 21:56:48.264: INFO: (0) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 749.862552ms) Apr 22 21:56:48.264: INFO: (0) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 750.276313ms) Apr 22 21:56:48.265: INFO: (0) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 751.247934ms) Apr 22 21:56:48.268: INFO: (0) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname1/proxy/: foo (200; 754.04874ms) Apr 22 21:56:48.268: INFO: (0) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname1/proxy/: foo (200; 754.423253ms) Apr 22 21:56:48.268: INFO: (0) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname2/proxy/: bar (200; 754.262036ms) Apr 22 21:56:48.275: INFO: (0) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 760.605926ms) Apr 22 21:56:48.275: INFO: (0) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 760.482458ms) Apr 22 21:56:48.275: INFO: (0) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname2/proxy/: tls qux (200; 760.587976ms) Apr 22 21:56:48.275: INFO: (0) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname1/proxy/: tls baz (200; 760.634699ms) Apr 22 21:56:48.275: INFO: (0) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: ... 
(200; 7.421915ms) Apr 22 21:56:48.285: INFO: (1) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 8.927802ms) Apr 22 21:56:48.285: INFO: (1) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 9.555972ms) Apr 22 21:56:48.285: INFO: (1) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 9.629513ms) Apr 22 21:56:48.285: INFO: (1) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:1080/proxy/: test<... (200; 9.533343ms) Apr 22 21:56:48.285: INFO: (1) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 9.530386ms) Apr 22 21:56:48.285: INFO: (1) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 9.796983ms) Apr 22 21:56:48.285: INFO: (1) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 9.752921ms) Apr 22 21:56:48.285: INFO: (1) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 9.771648ms) Apr 22 21:56:48.285: INFO: (1) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname2/proxy/: tls qux (200; 9.74954ms) Apr 22 21:56:48.285: INFO: (1) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: ... (200; 16.547434ms) Apr 22 21:56:48.327: INFO: (2) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 16.504764ms) Apr 22 21:56:48.327: INFO: (2) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 16.514228ms) Apr 22 21:56:48.327: INFO: (2) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 16.579575ms) Apr 22 21:56:48.327: INFO: (2) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:1080/proxy/: test<... 
(200; 16.553997ms) Apr 22 21:56:48.328: INFO: (2) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 17.61654ms) Apr 22 21:56:48.329: INFO: (2) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname2/proxy/: tls qux (200; 18.056525ms) Apr 22 21:56:48.329: INFO: (2) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 18.108269ms) Apr 22 21:56:48.329: INFO: (2) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname2/proxy/: bar (200; 17.970987ms) Apr 22 21:56:48.329: INFO: (2) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname1/proxy/: foo (200; 18.07903ms) Apr 22 21:56:48.329: INFO: (2) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname1/proxy/: tls baz (200; 18.128681ms) Apr 22 21:56:48.329: INFO: (2) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test<... (200; 11.10358ms) Apr 22 21:56:48.341: INFO: (3) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 11.031539ms) Apr 22 21:56:48.341: INFO: (3) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test (200; 11.194494ms) Apr 22 21:56:48.341: INFO: (3) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname2/proxy/: bar (200; 11.191168ms) Apr 22 21:56:48.341: INFO: (3) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 11.098708ms) Apr 22 21:56:48.341: INFO: (3) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname2/proxy/: tls qux (200; 11.20216ms) Apr 22 21:56:48.341: INFO: (3) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 11.283708ms) Apr 22 21:56:48.341: INFO: (3) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname1/proxy/: tls baz (200; 11.220976ms) Apr 22 21:56:48.341: INFO: (3) 
/api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 11.387728ms) Apr 22 21:56:48.341: INFO: (3) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... (200; 11.40161ms) Apr 22 21:56:48.345: INFO: (4) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... (200; 4.159193ms) Apr 22 21:56:48.345: INFO: (4) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 4.357684ms) Apr 22 21:56:48.345: INFO: (4) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 4.336704ms) Apr 22 21:56:48.345: INFO: (4) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:1080/proxy/: test<... (200; 4.349536ms) Apr 22 21:56:48.345: INFO: (4) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 4.40086ms) Apr 22 21:56:48.345: INFO: (4) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test<... (200; 8.712279ms) Apr 22 21:56:48.356: INFO: (5) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 8.621417ms) Apr 22 21:56:48.358: INFO: (5) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 10.762759ms) Apr 22 21:56:48.358: INFO: (5) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... 
(200; 10.787603ms) Apr 22 21:56:48.358: INFO: (5) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 11.074995ms) Apr 22 21:56:48.358: INFO: (5) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 11.214838ms) Apr 22 21:56:48.359: INFO: (5) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 11.3405ms) Apr 22 21:56:48.359: INFO: (5) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 11.378969ms) Apr 22 21:56:48.359: INFO: (5) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test<... (200; 3.10266ms) Apr 22 21:56:48.363: INFO: (6) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 3.179733ms) Apr 22 21:56:48.393: INFO: (6) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 32.914962ms) Apr 22 21:56:48.393: INFO: (6) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 33.158106ms) Apr 22 21:56:48.393: INFO: (6) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname1/proxy/: tls baz (200; 33.20686ms) Apr 22 21:56:48.394: INFO: (6) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 33.539374ms) Apr 22 21:56:48.394: INFO: (6) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 33.593692ms) Apr 22 21:56:48.394: INFO: (6) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 33.601316ms) Apr 22 21:56:48.394: INFO: (6) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... 
(200; 33.720205ms) Apr 22 21:56:48.394: INFO: (6) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 33.837075ms) Apr 22 21:56:48.394: INFO: (6) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 34.206814ms) Apr 22 21:56:48.394: INFO: (6) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: ... (200; 3.862214ms) Apr 22 21:56:48.401: INFO: (7) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname1/proxy/: tls baz (200; 6.071297ms) Apr 22 21:56:48.401: INFO: (7) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname1/proxy/: foo (200; 6.089097ms) Apr 22 21:56:48.401: INFO: (7) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 6.131987ms) Apr 22 21:56:48.402: INFO: (7) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname2/proxy/: tls qux (200; 6.237342ms) Apr 22 21:56:48.402: INFO: (7) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 6.861558ms) Apr 22 21:56:48.403: INFO: (7) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 7.595939ms) Apr 22 21:56:48.403: INFO: (7) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 8.054035ms) Apr 22 21:56:48.403: INFO: (7) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 8.089301ms) Apr 22 21:56:48.403: INFO: (7) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test<... 
(200; 8.202464ms) Apr 22 21:56:48.403: INFO: (7) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 8.134604ms) Apr 22 21:56:48.403: INFO: (7) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname1/proxy/: foo (200; 8.261049ms) Apr 22 21:56:48.403: INFO: (7) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 8.212944ms) Apr 22 21:56:48.411: INFO: (8) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test<... (200; 8.170783ms) Apr 22 21:56:48.412: INFO: (8) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 8.420834ms) Apr 22 21:56:48.412: INFO: (8) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 8.387841ms) Apr 22 21:56:48.412: INFO: (8) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname2/proxy/: tls qux (200; 8.470268ms) Apr 22 21:56:48.412: INFO: (8) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname1/proxy/: foo (200; 8.664605ms) Apr 22 21:56:48.413: INFO: (8) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 8.528139ms) Apr 22 21:56:48.413: INFO: (8) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 9.601989ms) Apr 22 21:56:48.413: INFO: (8) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 9.355487ms) Apr 22 21:56:48.413: INFO: (8) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... 
(200; 9.651954ms) Apr 22 21:56:48.413: INFO: (8) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 9.605583ms) Apr 22 21:56:48.413: INFO: (8) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 9.681859ms) Apr 22 21:56:48.413: INFO: (8) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 9.736539ms) Apr 22 21:56:48.417: INFO: (9) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... (200; 3.49989ms) Apr 22 21:56:48.418: INFO: (9) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 4.086065ms) Apr 22 21:56:48.418: INFO: (9) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 4.128868ms) Apr 22 21:56:48.418: INFO: (9) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 4.241878ms) Apr 22 21:56:48.418: INFO: (9) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:1080/proxy/: test<... (200; 4.175107ms) Apr 22 21:56:48.418: INFO: (9) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test<... (200; 5.292711ms) Apr 22 21:56:48.425: INFO: (10) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 5.470968ms) Apr 22 21:56:48.425: INFO: (10) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... 
(200; 5.599688ms) Apr 22 21:56:48.425: INFO: (10) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 5.484768ms) Apr 22 21:56:48.425: INFO: (10) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 5.500081ms) Apr 22 21:56:48.425: INFO: (10) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 5.652498ms) Apr 22 21:56:48.425: INFO: (10) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 5.784686ms) Apr 22 21:56:48.425: INFO: (10) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 6.155472ms) Apr 22 21:56:48.426: INFO: (10) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname1/proxy/: foo (200; 7.416001ms) Apr 22 21:56:48.426: INFO: (10) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname2/proxy/: bar (200; 7.372048ms) Apr 22 21:56:48.427: INFO: (10) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname2/proxy/: tls qux (200; 7.372473ms) Apr 22 21:56:48.427: INFO: (10) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname1/proxy/: tls baz (200; 8.108079ms) Apr 22 21:56:48.427: INFO: (10) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 8.051732ms) Apr 22 21:56:48.428: INFO: (10) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname1/proxy/: foo (200; 8.357543ms) Apr 22 21:56:48.432: INFO: (11) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 4.405579ms) Apr 22 21:56:48.433: INFO: (11) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname1/proxy/: foo (200; 5.631738ms) Apr 22 21:56:48.433: INFO: (11) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname2/proxy/: bar (200; 5.750814ms) Apr 22 21:56:48.434: INFO: (11) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:1080/proxy/: 
test<... (200; 5.928618ms) Apr 22 21:56:48.434: INFO: (11) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname1/proxy/: foo (200; 6.012026ms) Apr 22 21:56:48.434: INFO: (11) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 6.381083ms) Apr 22 21:56:48.434: INFO: (11) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 6.346211ms) Apr 22 21:56:48.434: INFO: (11) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 6.369019ms) Apr 22 21:56:48.434: INFO: (11) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... (200; 6.473874ms) Apr 22 21:56:48.434: INFO: (11) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 6.366634ms) Apr 22 21:56:48.434: INFO: (11) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 6.389929ms) Apr 22 21:56:48.434: INFO: (11) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 6.509504ms) Apr 22 21:56:48.434: INFO: (11) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname2/proxy/: tls qux (200; 6.648692ms) Apr 22 21:56:48.434: INFO: (11) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test<... 
(200; 2.968467ms) Apr 22 21:56:48.438: INFO: (12) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 3.068361ms) Apr 22 21:56:48.438: INFO: (12) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 3.202804ms) Apr 22 21:56:48.438: INFO: (12) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 3.164404ms) Apr 22 21:56:48.438: INFO: (12) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 3.296046ms) Apr 22 21:56:48.439: INFO: (12) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 4.513438ms) Apr 22 21:56:48.439: INFO: (12) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname2/proxy/: bar (200; 4.506627ms) Apr 22 21:56:48.439: INFO: (12) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 4.552779ms) Apr 22 21:56:48.439: INFO: (12) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: ... (200; 4.642082ms) Apr 22 21:56:48.439: INFO: (12) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname1/proxy/: foo (200; 4.651657ms) Apr 22 21:56:48.439: INFO: (12) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 4.811903ms) Apr 22 21:56:48.439: INFO: (12) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname1/proxy/: tls baz (200; 4.738956ms) Apr 22 21:56:48.439: INFO: (12) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname2/proxy/: tls qux (200; 4.900896ms) Apr 22 21:56:48.444: INFO: (13) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:1080/proxy/: test<... 
(200; 4.897465ms) Apr 22 21:56:48.445: INFO: (13) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 5.25875ms) Apr 22 21:56:48.445: INFO: (13) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 5.435258ms) Apr 22 21:56:48.445: INFO: (13) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 5.434303ms) Apr 22 21:56:48.445: INFO: (13) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... (200; 5.478797ms) Apr 22 21:56:48.445: INFO: (13) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: ... (200; 4.049051ms) Apr 22 21:56:48.459: INFO: (14) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:1080/proxy/: test<... (200; 4.132175ms) Apr 22 21:56:48.459: INFO: (14) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 4.170893ms) Apr 22 21:56:48.459: INFO: (14) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 4.320147ms) Apr 22 21:56:48.459: INFO: (14) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 4.398918ms) Apr 22 21:56:48.459: INFO: (14) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 4.364344ms) Apr 22 21:56:48.459: INFO: (14) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 4.419926ms) Apr 22 21:56:48.460: INFO: (14) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test (200; 5.891675ms) Apr 22 21:56:48.467: INFO: (15) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 5.91184ms) Apr 22 21:56:48.467: INFO: (15) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:1080/proxy/: test<... 
(200; 6.103843ms) Apr 22 21:56:48.467: INFO: (15) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 6.293869ms) Apr 22 21:56:48.467: INFO: (15) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 6.190282ms) Apr 22 21:56:48.467: INFO: (15) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... (200; 6.261708ms) Apr 22 21:56:48.467: INFO: (15) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 6.304859ms) Apr 22 21:56:48.467: INFO: (15) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 6.243406ms) Apr 22 21:56:48.467: INFO: (15) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test (200; 3.292687ms) Apr 22 21:56:48.471: INFO: (16) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... (200; 3.311152ms) Apr 22 21:56:48.471: INFO: (16) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 3.318657ms) Apr 22 21:56:48.472: INFO: (16) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 3.677023ms) Apr 22 21:56:48.472: INFO: (16) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 3.880354ms) Apr 22 21:56:48.472: INFO: (16) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 3.904538ms) Apr 22 21:56:48.472: INFO: (16) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 4.072945ms) Apr 22 21:56:48.472: INFO: (16) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname2/proxy/: bar (200; 4.002307ms) Apr 22 21:56:48.472: INFO: (16) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname1/proxy/: tls baz (200; 3.987326ms) Apr 22 21:56:48.472: INFO: (16) 
/api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 3.984384ms) Apr 22 21:56:48.472: INFO: (16) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname2/proxy/: tls qux (200; 4.076597ms) Apr 22 21:56:48.472: INFO: (16) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname1/proxy/: foo (200; 4.062157ms) Apr 22 21:56:48.472: INFO: (16) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test<... (200; 4.136138ms) Apr 22 21:56:48.473: INFO: (16) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 4.727462ms) Apr 22 21:56:48.481: INFO: (17) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 7.936239ms) Apr 22 21:56:48.481: INFO: (17) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:1080/proxy/: test<... (200; 8.007518ms) Apr 22 21:56:48.481: INFO: (17) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 8.058043ms) Apr 22 21:56:48.481: INFO: (17) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 8.349005ms) Apr 22 21:56:48.481: INFO: (17) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname2/proxy/: bar (200; 8.57628ms) Apr 22 21:56:48.482: INFO: (17) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 8.840009ms) Apr 22 21:56:48.482: INFO: (17) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... 
(200; 8.934368ms) Apr 22 21:56:48.482: INFO: (17) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname1/proxy/: foo (200; 9.016499ms) Apr 22 21:56:48.482: INFO: (17) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 9.016088ms) Apr 22 21:56:48.482: INFO: (17) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:160/proxy/: foo (200; 9.018428ms) Apr 22 21:56:48.482: INFO: (17) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: test<... (200; 6.133443ms) Apr 22 21:56:48.492: INFO: (18) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:160/proxy/: foo (200; 6.420503ms) Apr 22 21:56:48.492: INFO: (18) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:1080/proxy/: ... (200; 6.507942ms) Apr 22 21:56:48.513: INFO: (18) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 26.848787ms) Apr 22 21:56:48.513: INFO: (18) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname2/proxy/: bar (200; 27.404344ms) Apr 22 21:56:48.513: INFO: (18) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname1/proxy/: foo (200; 27.412591ms) Apr 22 21:56:48.514: INFO: (18) /api/v1/namespaces/proxy-2300/pods/http:proxy-service-b842r-k7j6r:162/proxy/: bar (200; 27.33115ms) Apr 22 21:56:48.514: INFO: (18) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:162/proxy/: bar (200; 27.364346ms) Apr 22 21:56:48.514: INFO: (18) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:462/proxy/: tls qux (200; 27.397272ms) Apr 22 21:56:48.514: INFO: (18) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 27.393611ms) Apr 22 21:56:48.514: INFO: (18) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:460/proxy/: tls baz (200; 27.577604ms) Apr 22 21:56:48.514: INFO: (18) /api/v1/namespaces/proxy-2300/pods/https:proxy-service-b842r-k7j6r:443/proxy/: ... 
(200; 4.852807ms) Apr 22 21:56:48.546: INFO: (19) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r:1080/proxy/: test<... (200; 4.957627ms) Apr 22 21:56:48.546: INFO: (19) /api/v1/namespaces/proxy-2300/pods/proxy-service-b842r-k7j6r/proxy/: test (200; 5.296188ms) Apr 22 21:56:48.546: INFO: (19) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname2/proxy/: bar (200; 5.223548ms) Apr 22 21:56:48.546: INFO: (19) /api/v1/namespaces/proxy-2300/services/proxy-service-b842r:portname1/proxy/: foo (200; 5.146722ms) Apr 22 21:56:48.546: INFO: (19) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname2/proxy/: tls qux (200; 5.381896ms) Apr 22 21:56:48.546: INFO: (19) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname2/proxy/: bar (200; 5.553337ms) Apr 22 21:56:48.547: INFO: (19) /api/v1/namespaces/proxy-2300/services/https:proxy-service-b842r:tlsportname1/proxy/: tls baz (200; 5.916959ms) Apr 22 21:56:48.547: INFO: (19) /api/v1/namespaces/proxy-2300/services/http:proxy-service-b842r:portname1/proxy/: foo (200; 5.766043ms) STEP: deleting ReplicationController proxy-service-b842r in namespace proxy-2300, will wait for the garbage collector to delete the pods Apr 22 21:56:48.766: INFO: Deleting ReplicationController proxy-service-b842r took: 164.795312ms Apr 22 21:56:48.867: INFO: Terminating ReplicationController proxy-service-b842r pods took: 100.213088ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:56:59.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2300" for this suite. 
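Each probe line above records the response body, HTTP status, and elapsed time for one proxied GET against a pod or service endpoint. A minimal sketch of that measure-and-format step, assuming nothing about the e2e framework's internals (the function names here are illustrative, not the framework's):

```python
import time

def timed_call(fn):
    """Invoke fn and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start

def format_probe(body, status, elapsed):
    """Render one probe result in the log's '<body> (<status>; <n>ms)' shape."""
    return "%s (%d; %.6fms)" % (body, status, elapsed * 1e3)
```

The conformance test issues these requests through the apiserver's proxy subresource (`/api/v1/namespaces/<ns>/pods/<scheme>:<pod>:<port>/proxy/`) and checks each response before moving on to the next numbered round.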
• [SLOW TEST:17.636 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":162,"skipped":2522,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:56:59.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-403f7476-24eb-41f1-acee-22e6680a9970 STEP: Creating a pod to test consume secrets Apr 22 21:56:59.725: INFO: Waiting up to 5m0s for pod "pod-secrets-82009b2c-917f-4b6b-a360-0b00e83d1d8c" in namespace "secrets-7522" to be "success or failure" Apr 22 21:56:59.729: INFO: Pod "pod-secrets-82009b2c-917f-4b6b-a360-0b00e83d1d8c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.765839ms Apr 22 21:57:01.734: INFO: Pod "pod-secrets-82009b2c-917f-4b6b-a360-0b00e83d1d8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008129739s Apr 22 21:57:03.742: INFO: Pod "pod-secrets-82009b2c-917f-4b6b-a360-0b00e83d1d8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01644215s STEP: Saw pod success Apr 22 21:57:03.742: INFO: Pod "pod-secrets-82009b2c-917f-4b6b-a360-0b00e83d1d8c" satisfied condition "success or failure" Apr 22 21:57:03.745: INFO: Trying to get logs from node jerma-worker pod pod-secrets-82009b2c-917f-4b6b-a360-0b00e83d1d8c container secret-volume-test: STEP: delete the pod Apr 22 21:57:03.759: INFO: Waiting for pod pod-secrets-82009b2c-917f-4b6b-a360-0b00e83d1d8c to disappear Apr 22 21:57:03.777: INFO: Pod pod-secrets-82009b2c-917f-4b6b-a360-0b00e83d1d8c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:57:03.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7522" for this suite. STEP: Destroying namespace "secret-namespace-9828" for this suite. 
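The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines are a poll loop: fetch the pod phase every couple of seconds until it reaches a terminal state or the timeout expires. A framework-agnostic sketch (the 2s interval and the `get_phase` callback are assumptions for illustration):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            sleep=None, clock=None):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed'.

    sleep/clock are injectable for testing; defaults use the real clock.
    """
    sleep = sleep or time.sleep
    clock = clock or time.monotonic
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError("pod still %s after %.0fs" % (phase, timeout))
        sleep(interval)
```

Injecting `sleep` and `clock` keeps the loop testable without a cluster or real delays.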
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2525,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:57:03.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 21:57:03.885: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:57:10.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3822" for this suite. 
• [SLOW TEST:6.658 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":164,"skipped":2547,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:57:10.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 21:57:10.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-964d7e43-166d-4a4b-851e-25f70706ed98" in namespace "downward-api-347" to be "success or failure" Apr 22 21:57:10.565: INFO: Pod 
"downwardapi-volume-964d7e43-166d-4a4b-851e-25f70706ed98": Phase="Pending", Reason="", readiness=false. Elapsed: 29.45737ms Apr 22 21:57:12.569: INFO: Pod "downwardapi-volume-964d7e43-166d-4a4b-851e-25f70706ed98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033092379s Apr 22 21:57:14.573: INFO: Pod "downwardapi-volume-964d7e43-166d-4a4b-851e-25f70706ed98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037218489s STEP: Saw pod success Apr 22 21:57:14.573: INFO: Pod "downwardapi-volume-964d7e43-166d-4a4b-851e-25f70706ed98" satisfied condition "success or failure" Apr 22 21:57:14.576: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-964d7e43-166d-4a4b-851e-25f70706ed98 container client-container: STEP: delete the pod Apr 22 21:57:14.608: INFO: Waiting for pod downwardapi-volume-964d7e43-166d-4a4b-851e-25f70706ed98 to disappear Apr 22 21:57:14.645: INFO: Pod downwardapi-volume-964d7e43-166d-4a4b-851e-25f70706ed98 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:57:14.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-347" for this suite. 
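The Downward API case above mounts a volume whose file content comes from the pod's own metadata, here the pod name. The shape of such a volume in a core/v1 pod spec, built as a plain dict for illustration:

```python
def downward_api_volume(name="podinfo"):
    """Build a core/v1 volume that projects metadata.name into a file 'podname'."""
    return {
        "name": name,
        "downwardAPI": {
            "items": [
                {
                    "path": "podname",  # file created under the volume's mount path
                    "fieldRef": {"fieldPath": "metadata.name"},
                },
            ]
        },
    }
```

The test container then prints the projected file (mount path assumed) and the test compares the logged contents against the pod's actual name.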
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2550,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:57:14.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:57:15.333: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:57:17.345: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189435, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189435, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189435, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189435, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:57:20.393: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 22 21:57:24.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3939 to-be-attached-pod -i -c=container1' Apr 22 21:57:26.887: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:57:26.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3939" for this suite. STEP: Destroying namespace "webhook-3939-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.344 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":166,"skipped":2576,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:57:26.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-f79ddfbc-ea99-4f5d-ad2d-ad5795a2662a in namespace container-probe-2647 Apr 22 21:57:33.085: INFO: Started pod liveness-f79ddfbc-ea99-4f5d-ad2d-ad5795a2662a in namespace container-probe-2647 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 
21:57:33.088: INFO: Initial restart count of pod liveness-f79ddfbc-ea99-4f5d-ad2d-ad5795a2662a is 0 Apr 22 21:57:57.138: INFO: Restart count of pod container-probe-2647/liveness-f79ddfbc-ea99-4f5d-ad2d-ad5795a2662a is now 1 (24.050858179s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:57:57.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2647" for this suite. • [SLOW TEST:30.205 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2581,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:57:57.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on 
tmpfs Apr 22 21:57:57.266: INFO: Waiting up to 5m0s for pod "pod-06c8dc8f-e384-498d-a757-c54c427a39e2" in namespace "emptydir-2228" to be "success or failure" Apr 22 21:57:57.538: INFO: Pod "pod-06c8dc8f-e384-498d-a757-c54c427a39e2": Phase="Pending", Reason="", readiness=false. Elapsed: 271.54508ms Apr 22 21:57:59.543: INFO: Pod "pod-06c8dc8f-e384-498d-a757-c54c427a39e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276583301s Apr 22 21:58:01.547: INFO: Pod "pod-06c8dc8f-e384-498d-a757-c54c427a39e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.280542994s STEP: Saw pod success Apr 22 21:58:01.547: INFO: Pod "pod-06c8dc8f-e384-498d-a757-c54c427a39e2" satisfied condition "success or failure" Apr 22 21:58:01.549: INFO: Trying to get logs from node jerma-worker2 pod pod-06c8dc8f-e384-498d-a757-c54c427a39e2 container test-container: STEP: delete the pod Apr 22 21:58:01.617: INFO: Waiting for pod pod-06c8dc8f-e384-498d-a757-c54c427a39e2 to disappear Apr 22 21:58:01.623: INFO: Pod pod-06c8dc8f-e384-498d-a757-c54c427a39e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:58:01.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2228" for this suite. 
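The emptydir test titles encode (user, mode, medium): `(root,0777,tmpfs)` means the test file is created with octal mode 0777 on a tmpfs-backed emptyDir. What those permission bits grant can be decoded with the stdlib:

```python
import stat

def describe_mode(mode):
    """Summarize an octal file mode as rwx triples, e.g. 0o666 -> 'rw-rw-rw-'."""
    return stat.filemode(stat.S_IFREG | mode)[1:]
```

So 0777 grants read/write/execute to owner, group, and others, while the `(root,0666,tmpfs)` case later in the run drops the execute bits.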
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:58:01.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-658 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-658 STEP: Creating statefulset with conflicting port in namespace statefulset-658 STEP: Waiting until pod test-pod will start running in namespace statefulset-658 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-658 Apr 22 21:58:07.786: INFO: Observed stateful pod in namespace: statefulset-658, name: ss-0, uid: 03178f54-3c40-4cb3-b60d-dd479c8f8314, status phase: Pending. Waiting for statefulset controller to delete. 
Apr 22 21:58:08.112: INFO: Observed stateful pod in namespace: statefulset-658, name: ss-0, uid: 03178f54-3c40-4cb3-b60d-dd479c8f8314, status phase: Failed. Waiting for statefulset controller to delete. Apr 22 21:58:08.161: INFO: Observed stateful pod in namespace: statefulset-658, name: ss-0, uid: 03178f54-3c40-4cb3-b60d-dd479c8f8314, status phase: Failed. Waiting for statefulset controller to delete. Apr 22 21:58:08.169: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-658 STEP: Removing pod with conflicting port in namespace statefulset-658 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-658 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 22 21:58:12.236: INFO: Deleting all statefulset in ns statefulset-658 Apr 22 21:58:12.240: INFO: Scaling statefulset ss to 0 Apr 22 21:58:22.256: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 21:58:22.260: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:58:22.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-658" for this suite. 
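The StatefulSet test above asserts that the controller deletes the port-conflicted `ss-0` and recreates it at least once. The observed/deleted event sequence in the log reduces to a check along these lines (the event tuples are an illustrative encoding, not the framework's types):

```python
def recreated_at_least_once(events):
    """True if the pod was observed deleted and then observed again afterwards.

    events: iterable of ('observed', phase) or ('deleted',) tuples, mirroring
    the 'Observed stateful pod' / 'Observed delete event' lines in the log.
    """
    seen_delete = False
    for ev in events:
        if ev[0] == "deleted":
            seen_delete = True
        elif ev[0] == "observed" and seen_delete:
            return True
    return False
```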
• [SLOW TEST:20.653 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":169,"skipped":2695,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:58:22.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Apr 22 21:58:22.353: INFO: Waiting up to 5m0s for pod "var-expansion-136f44f9-4ff2-496a-8d7b-7ccd9c68749c" in namespace "var-expansion-172" to be "success or failure" Apr 22 21:58:22.356: INFO: Pod "var-expansion-136f44f9-4ff2-496a-8d7b-7ccd9c68749c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.079789ms Apr 22 21:58:24.360: INFO: Pod "var-expansion-136f44f9-4ff2-496a-8d7b-7ccd9c68749c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006513435s Apr 22 21:58:26.364: INFO: Pod "var-expansion-136f44f9-4ff2-496a-8d7b-7ccd9c68749c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010544551s STEP: Saw pod success Apr 22 21:58:26.364: INFO: Pod "var-expansion-136f44f9-4ff2-496a-8d7b-7ccd9c68749c" satisfied condition "success or failure" Apr 22 21:58:26.366: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-136f44f9-4ff2-496a-8d7b-7ccd9c68749c container dapi-container: STEP: delete the pod Apr 22 21:58:26.385: INFO: Waiting for pod var-expansion-136f44f9-4ff2-496a-8d7b-7ccd9c68749c to disappear Apr 22 21:58:26.388: INFO: Pod var-expansion-136f44f9-4ff2-496a-8d7b-7ccd9c68749c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:58:26.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-172" for this suite. 
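The Variable Expansion test exercises Kubernetes' `$(VAR)` substitution in container commands. Its semantics, as I understand them: `$(VAR)` is replaced when VAR is defined, `$$` escapes a dollar sign so `$$(VAR)` yields a literal `$(VAR)`, and unresolvable references pass through unchanged. A rough sketch of that rule:

```python
def expand(s, env):
    """Approximate Kubernetes $(VAR) expansion for container commands."""
    out, i = [], 0
    while i < len(s):
        if s.startswith("$$", i):        # '$$' escapes a single '$'
            out.append("$")
            i += 2
        elif s.startswith("$(", i):
            end = s.find(")", i)
            if end == -1:                # no closing paren: keep the literal tail
                out.append(s[i:])
                break
            var = s[i + 2:end]
            if var in env:
                out.append(env[var])
            else:                        # undefined reference: left as-is
                out.append(s[i:end + 1])
            i = end + 1
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```

The expansion is applied to the container's command before it runs, which is what the test's `dapi-container` output verifies.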
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2704,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:58:26.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:58:26.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6279" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2712,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:58:26.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 22 21:58:26.585: INFO: Waiting up to 5m0s for pod "pod-58a25779-676e-4b52-8d1e-d4118c12b810" in namespace "emptydir-307" to be "success or failure" Apr 22 21:58:26.596: INFO: Pod "pod-58a25779-676e-4b52-8d1e-d4118c12b810": Phase="Pending", Reason="", readiness=false. Elapsed: 11.188171ms Apr 22 21:58:28.604: INFO: Pod "pod-58a25779-676e-4b52-8d1e-d4118c12b810": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019130441s Apr 22 21:58:30.608: INFO: Pod "pod-58a25779-676e-4b52-8d1e-d4118c12b810": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023040322s STEP: Saw pod success Apr 22 21:58:30.608: INFO: Pod "pod-58a25779-676e-4b52-8d1e-d4118c12b810" satisfied condition "success or failure" Apr 22 21:58:30.611: INFO: Trying to get logs from node jerma-worker2 pod pod-58a25779-676e-4b52-8d1e-d4118c12b810 container test-container: STEP: delete the pod Apr 22 21:58:30.647: INFO: Waiting for pod pod-58a25779-676e-4b52-8d1e-d4118c12b810 to disappear Apr 22 21:58:30.675: INFO: Pod pod-58a25779-676e-4b52-8d1e-d4118c12b810 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:58:30.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-307" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2795,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:58:30.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod 
busybox-2bf03941-8f7d-49db-9101-926ae794f48c in namespace container-probe-652 Apr 22 21:58:34.766: INFO: Started pod busybox-2bf03941-8f7d-49db-9101-926ae794f48c in namespace container-probe-652 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 21:58:34.806: INFO: Initial restart count of pod busybox-2bf03941-8f7d-49db-9101-926ae794f48c is 0 Apr 22 21:59:25.219: INFO: Restart count of pod container-probe-652/busybox-2bf03941-8f7d-49db-9101-926ae794f48c is now 1 (50.41300681s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:59:25.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-652" for this suite. • [SLOW TEST:54.576 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2802,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:59:25.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: 
Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-d4708e4d-1472-40e4-981a-88cd0e43ed90 STEP: Creating secret with name secret-projected-all-test-volume-d544705b-875f-4789-aefe-eb1a64380234 STEP: Creating a pod to test Check all projections for projected volume plugin Apr 22 21:59:25.368: INFO: Waiting up to 5m0s for pod "projected-volume-874d6cad-f32c-41d1-86de-7eb18e96f186" in namespace "projected-8983" to be "success or failure" Apr 22 21:59:25.403: INFO: Pod "projected-volume-874d6cad-f32c-41d1-86de-7eb18e96f186": Phase="Pending", Reason="", readiness=false. Elapsed: 35.012072ms Apr 22 21:59:27.471: INFO: Pod "projected-volume-874d6cad-f32c-41d1-86de-7eb18e96f186": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102612951s Apr 22 21:59:29.473: INFO: Pod "projected-volume-874d6cad-f32c-41d1-86de-7eb18e96f186": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.105191902s STEP: Saw pod success Apr 22 21:59:29.473: INFO: Pod "projected-volume-874d6cad-f32c-41d1-86de-7eb18e96f186" satisfied condition "success or failure" Apr 22 21:59:29.475: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-874d6cad-f32c-41d1-86de-7eb18e96f186 container projected-all-volume-test: STEP: delete the pod Apr 22 21:59:29.605: INFO: Waiting for pod projected-volume-874d6cad-f32c-41d1-86de-7eb18e96f186 to disappear Apr 22 21:59:29.608: INFO: Pod projected-volume-874d6cad-f32c-41d1-86de-7eb18e96f186 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:59:29.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8983" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2811,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:59:29.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: Creating a pod to test downward API volume plugin Apr 22 21:59:29.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1790fd3-829a-47e1-896c-91bb92d10ca2" in namespace "downward-api-5584" to be "success or failure" Apr 22 21:59:29.742: INFO: Pod "downwardapi-volume-a1790fd3-829a-47e1-896c-91bb92d10ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 55.072633ms Apr 22 21:59:31.746: INFO: Pod "downwardapi-volume-a1790fd3-829a-47e1-896c-91bb92d10ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058820392s Apr 22 21:59:33.749: INFO: Pod "downwardapi-volume-a1790fd3-829a-47e1-896c-91bb92d10ca2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062135382s STEP: Saw pod success Apr 22 21:59:33.749: INFO: Pod "downwardapi-volume-a1790fd3-829a-47e1-896c-91bb92d10ca2" satisfied condition "success or failure" Apr 22 21:59:33.752: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a1790fd3-829a-47e1-896c-91bb92d10ca2 container client-container: STEP: delete the pod Apr 22 21:59:33.833: INFO: Waiting for pod downwardapi-volume-a1790fd3-829a-47e1-896c-91bb92d10ca2 to disappear Apr 22 21:59:33.837: INFO: Pod downwardapi-volume-a1790fd3-829a-47e1-896c-91bb92d10ca2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 21:59:33.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5584" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2814,"failed":0} SSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 21:59:33.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3377 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3377;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3377 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3377;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3377.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3377.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3377.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3377.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.dns-test-service.dns-3377.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3377.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3377.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3377.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3377.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3377.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3377.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.234.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.234.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.234.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.234.170_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3377 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3377;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3377 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3377;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3377.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3377.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3377.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3377.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3377.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3377.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3377.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3377.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3377.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3377.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3377.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3377.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.234.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.234.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.234.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.234.170_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 21:59:40.289: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.292: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.294: INFO: Unable to read wheezy_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.297: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.301: INFO: Unable to read wheezy_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods 
dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.305: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.308: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.312: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.337: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.343: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.347: INFO: Unable to read jessie_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.349: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.352: INFO: Unable to read jessie_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the 
requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.354: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.356: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.358: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:40.372: INFO: Lookups using dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3377 wheezy_tcp@dns-test-service.dns-3377 wheezy_udp@dns-test-service.dns-3377.svc wheezy_tcp@dns-test-service.dns-3377.svc wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3377 jessie_tcp@dns-test-service.dns-3377 jessie_udp@dns-test-service.dns-3377.svc jessie_tcp@dns-test-service.dns-3377.svc jessie_udp@_http._tcp.dns-test-service.dns-3377.svc jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc] Apr 22 21:59:45.377: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.381: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not 
find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.384: INFO: Unable to read wheezy_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.388: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.391: INFO: Unable to read wheezy_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.395: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.399: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.401: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.420: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.423: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: 
the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.425: INFO: Unable to read jessie_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.427: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.429: INFO: Unable to read jessie_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.431: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.434: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.436: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:45.451: INFO: Lookups using dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3377 wheezy_tcp@dns-test-service.dns-3377 wheezy_udp@dns-test-service.dns-3377.svc wheezy_tcp@dns-test-service.dns-3377.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3377 jessie_tcp@dns-test-service.dns-3377 jessie_udp@dns-test-service.dns-3377.svc jessie_tcp@dns-test-service.dns-3377.svc jessie_udp@_http._tcp.dns-test-service.dns-3377.svc jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc] Apr 22 21:59:50.392: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.395: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.431: INFO: Unable to read wheezy_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.439: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.442: INFO: Unable to read wheezy_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.444: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.447: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.449: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.470: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.472: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.475: INFO: Unable to read jessie_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.477: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.480: INFO: Unable to read jessie_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.482: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.484: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.487: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:50.503: INFO: Lookups using dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3377 wheezy_tcp@dns-test-service.dns-3377 wheezy_udp@dns-test-service.dns-3377.svc wheezy_tcp@dns-test-service.dns-3377.svc wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3377 jessie_tcp@dns-test-service.dns-3377 jessie_udp@dns-test-service.dns-3377.svc jessie_tcp@dns-test-service.dns-3377.svc jessie_udp@_http._tcp.dns-test-service.dns-3377.svc jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc] Apr 22 21:59:55.377: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.380: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.382: INFO: Unable to read wheezy_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 
21:59:55.385: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.388: INFO: Unable to read wheezy_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.391: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.393: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.396: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.417: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.420: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.423: INFO: Unable to read jessie_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods 
dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.425: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.428: INFO: Unable to read jessie_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.431: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.434: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.437: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 21:59:55.454: INFO: Lookups using dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3377 wheezy_tcp@dns-test-service.dns-3377 wheezy_udp@dns-test-service.dns-3377.svc wheezy_tcp@dns-test-service.dns-3377.svc wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3377 jessie_tcp@dns-test-service.dns-3377 jessie_udp@dns-test-service.dns-3377.svc jessie_tcp@dns-test-service.dns-3377.svc 
jessie_udp@_http._tcp.dns-test-service.dns-3377.svc jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc] Apr 22 22:00:00.377: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.380: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.384: INFO: Unable to read wheezy_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.387: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.392: INFO: Unable to read wheezy_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.419: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.422: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.424: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod 
dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.445: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.447: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.450: INFO: Unable to read jessie_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.498: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.502: INFO: Unable to read jessie_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.505: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.508: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.510: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:00.527: INFO: Lookups using dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3377 wheezy_tcp@dns-test-service.dns-3377 wheezy_udp@dns-test-service.dns-3377.svc wheezy_tcp@dns-test-service.dns-3377.svc wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3377 jessie_tcp@dns-test-service.dns-3377 jessie_udp@dns-test-service.dns-3377.svc jessie_tcp@dns-test-service.dns-3377.svc jessie_udp@_http._tcp.dns-test-service.dns-3377.svc jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc] Apr 22 22:00:05.378: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.382: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.385: INFO: Unable to read wheezy_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.388: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.390: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.393: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.395: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.398: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.417: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.420: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.422: INFO: Unable to read jessie_udp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.425: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377 from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.428: 
INFO: Unable to read jessie_udp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.430: INFO: Unable to read jessie_tcp@dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.433: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.436: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc from pod dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee: the server could not find the requested resource (get pods dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee) Apr 22 22:00:05.455: INFO: Lookups using dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3377 wheezy_tcp@dns-test-service.dns-3377 wheezy_udp@dns-test-service.dns-3377.svc wheezy_tcp@dns-test-service.dns-3377.svc wheezy_udp@_http._tcp.dns-test-service.dns-3377.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3377.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3377 jessie_tcp@dns-test-service.dns-3377 jessie_udp@dns-test-service.dns-3377.svc jessie_tcp@dns-test-service.dns-3377.svc jessie_udp@_http._tcp.dns-test-service.dns-3377.svc jessie_tcp@_http._tcp.dns-test-service.dns-3377.svc] Apr 22 22:00:10.515: INFO: DNS probes using dns-3377/dns-test-c6504cb6-f46c-4d2f-bb99-be33d203edee succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:00:11.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3377" for this suite. • [SLOW TEST:37.534 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":176,"skipped":2817,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:00:11.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:00:15.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2774" for this suite. 
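The failed-lookup lists in the DNS test above follow a fixed matrix: two probe images (wheezy, jessie) times two protocols (udp, tcp) times four name forms of the service, from the bare name up to the SRV-style `_http._tcp` form. A sketch reconstructing that matrix, with the service and namespace names taken from the log:

```python
# Reconstruct the DNS record-name matrix probed by the e2e DNS test,
# as listed in the "Lookups ... failed for:" lines above.
service = "dns-test-service"
namespace = "dns-3377"

# The four name forms probed, from shortest to most qualified.
name_forms = [
    service,
    f"{service}.{namespace}",
    f"{service}.{namespace}.svc",
    f"_http._tcp.{service}.{namespace}.svc",
]

# Log order: per image, per name form, both protocols.
probes = [
    f"{image}_{proto}@{name}"
    for image in ("wheezy", "jessie")
    for name in name_forms
    for proto in ("udp", "tcp")
]

print(len(probes))  # 16 entries, matching the failure list in the log
```

The probes only start succeeding once kube-dns has served the headless service's records to both test images, which is why the same 16-entry failure list repeats on a 5-second retry cadence before the final "DNS probes ... succeeded" line.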
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2820,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:00:15.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 22 22:00:23.571: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 22 22:00:23.594: INFO: Pod pod-with-poststart-http-hook still exists Apr 22 22:00:25.595: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 22 22:00:25.599: INFO: Pod pod-with-poststart-http-hook still exists Apr 22 22:00:27.595: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 22 22:00:27.598: INFO: Pod pod-with-poststart-http-hook still exists Apr 22 22:00:29.595: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 22 22:00:29.598: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:00:29.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9202" for this suite. 
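The "Waiting for pod ... to disappear" / "Pod ... still exists" pairs above are a plain poll loop with a fixed interval (the 2-second spacing matches the timestamps in the log). A minimal sketch of that pattern; `get_pod` here is a hypothetical stand-in for a pod lookup, not the e2e framework's API:

```python
import time


def wait_for_pod_to_disappear(get_pod, name, interval=2.0, timeout=60.0):
    """Poll get_pod(name) until it returns None or the timeout expires.

    get_pod is a stand-in callable (an assumption for this sketch) that
    returns None once the pod is gone; the real framework polls the
    apiserver instead.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_pod(name) is None:
            return True  # "Pod ... no longer exists"
        time.sleep(interval)  # "Pod ... still exists", retry
    return False  # still present when the timeout expired
```

In the log above the loop observes "still exists" three times before the pod's containers finish terminating and the lookup comes back empty.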
• [SLOW TEST:14.151 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2880,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:00:29.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 22:00:29.696: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c19e91e-9cd4-4653-861e-a90fd63493b4" in namespace "downward-api-8637" to be "success or failure" Apr 22 22:00:29.701: INFO: Pod "downwardapi-volume-8c19e91e-9cd4-4653-861e-a90fd63493b4": 
Phase="Pending", Reason="", readiness=false. Elapsed: 4.375664ms Apr 22 22:00:31.707: INFO: Pod "downwardapi-volume-8c19e91e-9cd4-4653-861e-a90fd63493b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010689268s Apr 22 22:00:33.717: INFO: Pod "downwardapi-volume-8c19e91e-9cd4-4653-861e-a90fd63493b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020702769s STEP: Saw pod success Apr 22 22:00:33.717: INFO: Pod "downwardapi-volume-8c19e91e-9cd4-4653-861e-a90fd63493b4" satisfied condition "success or failure" Apr 22 22:00:33.720: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8c19e91e-9cd4-4653-861e-a90fd63493b4 container client-container: STEP: delete the pod Apr 22 22:00:33.775: INFO: Waiting for pod downwardapi-volume-8c19e91e-9cd4-4653-861e-a90fd63493b4 to disappear Apr 22 22:00:33.788: INFO: Pod downwardapi-volume-8c19e91e-9cd4-4653-861e-a90fd63493b4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:00:33.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8637" for this suite. 
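The downward-API test above projects the container's memory limit into a volume file, where it appears as a plain byte count rather than the quantity string from the pod spec. A sketch of how a binary-suffix quantity maps to that byte value; the "64Mi" figure is illustrative only, since the actual limit used by the test is not shown in the log:

```python
# Convert a Kubernetes memory quantity with a binary suffix (Ki, Mi, Gi)
# to the plain byte count the downward API writes into the volume file.
_BINARY_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}


def quantity_to_bytes(quantity: str) -> int:
    for suffix, factor in _BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # already a bare byte count


print(quantity_to_bytes("64Mi"))  # 67108864
```

The test's client container reads that file back and the framework matches it against the expected byte count, which is what "Saw pod success" confirms.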
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2890,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:00:33.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:01:05.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9044" for this suite. STEP: Destroying namespace "nsdeletetest-5418" for this suite. Apr 22 22:01:05.134: INFO: Namespace nsdeletetest-5418 was already deleted STEP: Destroying namespace "nsdeletetest-1241" for this suite. 
• [SLOW TEST:31.342 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":180,"skipped":2896,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:01:05.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 22 22:01:05.207: INFO: Waiting up to 5m0s for pod "pod-8956a108-225d-427e-95c5-6d48f4897da1" in namespace "emptydir-2066" to be "success or failure" Apr 22 22:01:05.211: INFO: Pod "pod-8956a108-225d-427e-95c5-6d48f4897da1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.660806ms Apr 22 22:01:07.214: INFO: Pod "pod-8956a108-225d-427e-95c5-6d48f4897da1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007441969s Apr 22 22:01:09.219: INFO: Pod "pod-8956a108-225d-427e-95c5-6d48f4897da1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01181311s STEP: Saw pod success Apr 22 22:01:09.219: INFO: Pod "pod-8956a108-225d-427e-95c5-6d48f4897da1" satisfied condition "success or failure" Apr 22 22:01:09.222: INFO: Trying to get logs from node jerma-worker pod pod-8956a108-225d-427e-95c5-6d48f4897da1 container test-container: STEP: delete the pod Apr 22 22:01:09.256: INFO: Waiting for pod pod-8956a108-225d-427e-95c5-6d48f4897da1 to disappear Apr 22 22:01:09.268: INFO: Pod pod-8956a108-225d-427e-95c5-6d48f4897da1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:01:09.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2066" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2908,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:01:09.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:01:09.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8347" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":182,"skipped":2914,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:01:09.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 22:01:09.491: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 22 22:01:12.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9168 create -f -' Apr 22 22:01:15.894: INFO: stderr: "" Apr 22 22:01:15.894: INFO: stdout: 
"e2e-test-crd-publish-openapi-8212-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 22 22:01:15.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9168 delete e2e-test-crd-publish-openapi-8212-crds test-cr' Apr 22 22:01:16.003: INFO: stderr: "" Apr 22 22:01:16.003: INFO: stdout: "e2e-test-crd-publish-openapi-8212-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 22 22:01:16.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9168 apply -f -' Apr 22 22:01:16.232: INFO: stderr: "" Apr 22 22:01:16.232: INFO: stdout: "e2e-test-crd-publish-openapi-8212-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 22 22:01:16.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9168 delete e2e-test-crd-publish-openapi-8212-crds test-cr' Apr 22 22:01:16.370: INFO: stderr: "" Apr 22 22:01:16.370: INFO: stdout: "e2e-test-crd-publish-openapi-8212-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 22 22:01:16.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8212-crds' Apr 22 22:01:16.606: INFO: stderr: "" Apr 22 22:01:16.606: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8212-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:01:18.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9168" for this suite. 
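The QOS-class test earlier in this run expects the Guaranteed class when every container's requests equal its limits for both cpu and memory. A simplified sketch of that classification rule, with containers modeled as plain dicts; this mirrors the documented QoS semantics, not the apiserver's implementation (in particular, requests defaulting to limits when unset is omitted here):

```python
def qos_class(containers):
    """Classify pod QoS from per-container cpu/memory requests and limits.

    containers: list of dicts like
      {"requests": {"cpu": "100m", "memory": "64Mi"},
       "limits":   {"cpu": "100m", "memory": "64Mi"}}
    """
    # BestEffort: no container sets any requests or limits.
    if all(not c.get("requests") and not c.get("limits") for c in containers):
        return "BestEffort"
    # Guaranteed: every container sets cpu and memory limits,
    # and its requests equal its limits.
    guaranteed = all(
        c.get("requests")
        and c.get("limits")
        and {"cpu", "memory"} <= set(c["limits"])
        and c["requests"] == c["limits"]
        for c in containers
    )
    return "Guaranteed" if guaranteed else "Burstable"
```

Under this rule the test's pod, which sets matching resource requests and limits for memory and cpu, classifies as Guaranteed, which is the QOS class the test verifies on the created pod.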
• [SLOW TEST:9.108 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":183,"skipped":2914,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:01:18.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 22:01:18.619: INFO: Waiting up to 5m0s for pod "busybox-user-65534-635e0120-907a-4661-a73d-fb49cdf419c8" in namespace "security-context-test-9355" to be "success or failure" Apr 22 22:01:18.624: INFO: Pod "busybox-user-65534-635e0120-907a-4661-a73d-fb49cdf419c8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.444125ms Apr 22 22:01:20.660: INFO: Pod "busybox-user-65534-635e0120-907a-4661-a73d-fb49cdf419c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04041399s Apr 22 22:01:22.664: INFO: Pod "busybox-user-65534-635e0120-907a-4661-a73d-fb49cdf419c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044761033s Apr 22 22:01:22.664: INFO: Pod "busybox-user-65534-635e0120-907a-4661-a73d-fb49cdf419c8" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:01:22.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9355" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2926,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:01:22.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 22:01:22.759: INFO: Creating 
simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Apr 22 22:01:22.768: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:22.821: INFO: Number of nodes with available pods: 0 Apr 22 22:01:22.821: INFO: Node jerma-worker is running more than one daemon pod Apr 22 22:01:23.825: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:23.828: INFO: Number of nodes with available pods: 0 Apr 22 22:01:23.828: INFO: Node jerma-worker is running more than one daemon pod Apr 22 22:01:24.826: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:24.829: INFO: Number of nodes with available pods: 0 Apr 22 22:01:24.829: INFO: Node jerma-worker is running more than one daemon pod Apr 22 22:01:25.825: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:25.829: INFO: Number of nodes with available pods: 1 Apr 22 22:01:25.829: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:01:26.826: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:26.830: INFO: Number of nodes with available pods: 2 Apr 22 22:01:26.830: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 22 22:01:26.930: INFO: Wrong image for pod: daemon-set-jld6k. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:26.930: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:26.937: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:27.941: INFO: Wrong image for pod: daemon-set-jld6k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:27.941: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:27.952: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:28.941: INFO: Wrong image for pod: daemon-set-jld6k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:28.941: INFO: Pod daemon-set-jld6k is not available Apr 22 22:01:28.941: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:28.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:29.942: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 22 22:01:29.942: INFO: Pod daemon-set-txhck is not available Apr 22 22:01:29.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:30.942: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:30.942: INFO: Pod daemon-set-txhck is not available Apr 22 22:01:30.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:31.942: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:31.942: INFO: Pod daemon-set-txhck is not available Apr 22 22:01:31.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:32.942: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:32.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:34.536: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:34.673: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:34.941: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 22 22:01:34.942: INFO: Pod daemon-set-rqxbx is not available Apr 22 22:01:34.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:35.942: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:35.942: INFO: Pod daemon-set-rqxbx is not available Apr 22 22:01:35.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:36.941: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:36.942: INFO: Pod daemon-set-rqxbx is not available Apr 22 22:01:36.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:37.942: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 22:01:37.942: INFO: Pod daemon-set-rqxbx is not available Apr 22 22:01:37.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:38.941: INFO: Wrong image for pod: daemon-set-rqxbx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 22 22:01:38.941: INFO: Pod daemon-set-rqxbx is not available Apr 22 22:01:38.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:39.941: INFO: Pod daemon-set-zm7c6 is not available Apr 22 22:01:39.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 22 22:01:39.950: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:39.953: INFO: Number of nodes with available pods: 1 Apr 22 22:01:39.953: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:01:40.958: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:40.961: INFO: Number of nodes with available pods: 1 Apr 22 22:01:40.962: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:01:41.958: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:41.962: INFO: Number of nodes with available pods: 1 Apr 22 22:01:41.962: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:01:42.972: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:01:42.975: INFO: Number of nodes with available pods: 2 Apr 22 22:01:42.975: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set 
[Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3677, will wait for the garbage collector to delete the pods Apr 22 22:01:43.056: INFO: Deleting DaemonSet.extensions daemon-set took: 15.403463ms Apr 22 22:01:43.356: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.24704ms Apr 22 22:01:49.559: INFO: Number of nodes with available pods: 0 Apr 22 22:01:49.559: INFO: Number of running nodes: 0, number of available pods: 0 Apr 22 22:01:49.561: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3677/daemonsets","resourceVersion":"10233420"},"items":null} Apr 22 22:01:49.563: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3677/pods","resourceVersion":"10233420"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:01:49.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3677" for this suite. 
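
The RollingUpdate behavior verified by the DaemonSet test above corresponds to a manifest roughly like the following. This is an illustrative sketch, not output from the run: only the name `daemon-set` and the images `docker.io/library/httpd:2.4.38-alpine` / `gcr.io/kubernetes-e2e-test-images/agnhost:2.8` appear in the log; the selector labels, container name, and `maxUnavailable` value are assumptions.

```yaml
# Sketch of a DaemonSet with the RollingUpdate strategy exercised above.
# Labels, container name, and maxUnavailable are illustrative assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate        # pods are replaced node by node when the template changes
    rollingUpdate:
      maxUnavailable: 1        # at most one node's pod may be unavailable during the update
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
```

Updating `spec.template.spec.containers[0].image` (here, to the agnhost image) is what produces the "Wrong image for pod" polling seen in the log while old pods are deleted and replacements roll out.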
• [SLOW TEST:26.906 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":185,"skipped":2951,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:01:49.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 22:01:49.631: INFO: Creating deployment "test-recreate-deployment" Apr 22 22:01:49.666: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 22 22:01:49.702: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 22 22:01:51.709: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 22 22:01:51.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189709, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189709, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189709, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189709, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:01:53.716: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 22 22:01:53.723: INFO: Updating deployment test-recreate-deployment Apr 22 22:01:53.723: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 22 22:01:54.226: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9403 /apis/apps/v1/namespaces/deployment-9403/deployments/test-recreate-deployment a59ff719-3710-4772-8bc7-ad268c9dcf33 10233477 2 2020-04-22 22:01:49 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040ac148 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-22 22:01:53 +0000 UTC,LastTransitionTime:2020-04-22 22:01:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-22 22:01:53 +0000 UTC,LastTransitionTime:2020-04-22 22:01:49 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 22 22:01:54.230: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-9403 /apis/apps/v1/namespaces/deployment-9403/replicasets/test-recreate-deployment-5f94c574ff a9d4d9a9-e9aa-4bd3-8ffd-6194622654d2 10233475 1 2020-04-22 22:01:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment a59ff719-3710-4772-8bc7-ad268c9dcf33 0xc002f2d5e7 0xc002f2d5e8}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f2d658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:01:54.230: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 22 22:01:54.230: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-9403 /apis/apps/v1/namespaces/deployment-9403/replicasets/test-recreate-deployment-799c574856 d402c5d0-59e9-41e5-bfa9-0e3565b3242b 10233465 2 2020-04-22 22:01:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment a59ff719-3710-4772-8bc7-ad268c9dcf33 0xc002f2d6d7 0xc002f2d6d8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f2d768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:01:54.232: INFO: Pod "test-recreate-deployment-5f94c574ff-kfdwf" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-kfdwf test-recreate-deployment-5f94c574ff- deployment-9403 /api/v1/namespaces/deployment-9403/pods/test-recreate-deployment-5f94c574ff-kfdwf ffe4b43b-39bb-4dd0-ab38-8c966bda06f8 10233476 0 2020-04-22 22:01:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff a9d4d9a9-e9aa-4bd3-8ffd-6194622654d2 0xc002ded657 0xc002ded658}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8wvgr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8wvgr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8wvgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:01:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:01:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:01:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:01:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-22 22:01:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:01:54.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9403" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":186,"skipped":2965,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:01:54.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-444d1687-7553-4f27-88f1-c35836cbdb9d STEP: Creating a pod to test consume secrets Apr 22 22:01:54.312: 
INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a8d26290-2a54-41f9-9024-33b585e2168e" in namespace "projected-450" to be "success or failure" Apr 22 22:01:54.316: INFO: Pod "pod-projected-secrets-a8d26290-2a54-41f9-9024-33b585e2168e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.791849ms Apr 22 22:01:56.318: INFO: Pod "pod-projected-secrets-a8d26290-2a54-41f9-9024-33b585e2168e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006600423s Apr 22 22:01:58.322: INFO: Pod "pod-projected-secrets-a8d26290-2a54-41f9-9024-33b585e2168e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010417392s STEP: Saw pod success Apr 22 22:01:58.322: INFO: Pod "pod-projected-secrets-a8d26290-2a54-41f9-9024-33b585e2168e" satisfied condition "success or failure" Apr 22 22:01:58.325: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-a8d26290-2a54-41f9-9024-33b585e2168e container projected-secret-volume-test: STEP: delete the pod Apr 22 22:01:58.363: INFO: Waiting for pod pod-projected-secrets-a8d26290-2a54-41f9-9024-33b585e2168e to disappear Apr 22 22:01:58.382: INFO: Pod pod-projected-secrets-a8d26290-2a54-41f9-9024-33b585e2168e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:01:58.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-450" for this suite. 
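
The projected-secret test above mounts a secret through a `projected` volume with `defaultMode` set. A minimal sketch of such a pod follows; the secret name, mount path, and mode value are illustrative assumptions (the log only shows the generated pod and secret names), while the container name `projected-secret-volume-test` and the agnhost image do appear in the run.

```yaml
# Sketch of a pod consuming a secret via a projected volume with defaultMode.
# Secret name, mountPath, and mode 0400 are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400        # file mode applied to every projected key
      sources:
      - secret:
          name: projected-secret-test
```

The test passes when the container observes the secret files with the expected mode, which is why the pod is expected to reach "Succeeded" ("success or failure" condition) in the polling above.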
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":2973,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:01:58.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-48d13afb-d9b4-417e-a16d-b1443395ab34 STEP: Creating a pod to test consume configMaps Apr 22 22:01:58.993: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea39288b-9aa8-4b26-a39e-1e240c2efe71" in namespace "projected-7415" to be "success or failure" Apr 22 22:01:59.037: INFO: Pod "pod-projected-configmaps-ea39288b-9aa8-4b26-a39e-1e240c2efe71": Phase="Pending", Reason="", readiness=false. Elapsed: 44.446677ms Apr 22 22:02:01.041: INFO: Pod "pod-projected-configmaps-ea39288b-9aa8-4b26-a39e-1e240c2efe71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048311973s Apr 22 22:02:03.046: INFO: Pod "pod-projected-configmaps-ea39288b-9aa8-4b26-a39e-1e240c2efe71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.052722859s STEP: Saw pod success Apr 22 22:02:03.046: INFO: Pod "pod-projected-configmaps-ea39288b-9aa8-4b26-a39e-1e240c2efe71" satisfied condition "success or failure" Apr 22 22:02:03.049: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-ea39288b-9aa8-4b26-a39e-1e240c2efe71 container projected-configmap-volume-test: STEP: delete the pod Apr 22 22:02:03.082: INFO: Waiting for pod pod-projected-configmaps-ea39288b-9aa8-4b26-a39e-1e240c2efe71 to disappear Apr 22 22:02:03.112: INFO: Pod pod-projected-configmaps-ea39288b-9aa8-4b26-a39e-1e240c2efe71 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:02:03.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7415" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2989,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:02:03.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 
22 22:02:03.230: INFO: Create a RollingUpdate DaemonSet Apr 22 22:02:03.234: INFO: Check that daemon pods launch on every node of the cluster Apr 22 22:02:03.239: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:03.244: INFO: Number of nodes with available pods: 0 Apr 22 22:02:03.244: INFO: Node jerma-worker is running more than one daemon pod Apr 22 22:02:04.251: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:04.256: INFO: Number of nodes with available pods: 0 Apr 22 22:02:04.256: INFO: Node jerma-worker is running more than one daemon pod Apr 22 22:02:05.251: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:05.254: INFO: Number of nodes with available pods: 0 Apr 22 22:02:05.254: INFO: Node jerma-worker is running more than one daemon pod Apr 22 22:02:06.250: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:06.253: INFO: Number of nodes with available pods: 0 Apr 22 22:02:06.253: INFO: Node jerma-worker is running more than one daemon pod Apr 22 22:02:07.248: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:07.253: INFO: Number of nodes with available pods: 2 Apr 22 22:02:07.253: INFO: Number of running nodes: 2, number of available pods: 2 Apr 22 22:02:07.253: INFO: Update the DaemonSet to trigger a rollout Apr 22 22:02:07.260: INFO: Updating DaemonSet daemon-set Apr 22 22:02:19.317: 
INFO: Roll back the DaemonSet before rollout is complete Apr 22 22:02:19.409: INFO: Updating DaemonSet daemon-set Apr 22 22:02:19.409: INFO: Make sure DaemonSet rollback is complete Apr 22 22:02:19.413: INFO: Wrong image for pod: daemon-set-hswq2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 22 22:02:19.413: INFO: Pod daemon-set-hswq2 is not available Apr 22 22:02:19.419: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:20.423: INFO: Wrong image for pod: daemon-set-hswq2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 22 22:02:20.423: INFO: Pod daemon-set-hswq2 is not available Apr 22 22:02:20.429: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:21.547: INFO: Wrong image for pod: daemon-set-hswq2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 22 22:02:21.547: INFO: Pod daemon-set-hswq2 is not available Apr 22 22:02:21.552: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:22.423: INFO: Wrong image for pod: daemon-set-hswq2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 22 22:02:22.424: INFO: Pod daemon-set-hswq2 is not available Apr 22 22:02:22.428: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:23.423: INFO: Wrong image for pod: daemon-set-hswq2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 22 22:02:23.424: INFO: Pod daemon-set-hswq2 is not available Apr 22 22:02:23.427: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:24.425: INFO: Wrong image for pod: daemon-set-hswq2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 22 22:02:24.426: INFO: Pod daemon-set-hswq2 is not available Apr 22 22:02:24.431: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:25.423: INFO: Wrong image for pod: daemon-set-hswq2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 22 22:02:25.423: INFO: Pod daemon-set-hswq2 is not available Apr 22 22:02:25.428: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:26.423: INFO: Wrong image for pod: daemon-set-hswq2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 22 22:02:26.423: INFO: Pod daemon-set-hswq2 is not available Apr 22 22:02:26.428: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:27.423: INFO: Wrong image for pod: daemon-set-hswq2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 22 22:02:27.423: INFO: Pod daemon-set-hswq2 is not available Apr 22 22:02:27.427: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:28.422: INFO: Wrong image for pod: daemon-set-hswq2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 22 22:02:28.422: INFO: Pod daemon-set-hswq2 is not available Apr 22 22:02:28.425: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:02:29.427: INFO: Pod daemon-set-bk5zp is not available Apr 22 22:02:29.430: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5929, will wait for the garbage collector to delete the pods Apr 22 22:02:29.500: INFO: Deleting DaemonSet.extensions daemon-set took: 7.869473ms Apr 22 22:02:29.900: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.286447ms Apr 22 22:02:39.529: INFO: Number of nodes with available pods: 0 Apr 22 22:02:39.529: INFO: Number of running nodes: 0, number of available pods: 0 Apr 22 22:02:39.532: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5929/daemonsets","resourceVersion":"10233793"},"items":null} Apr 22 22:02:39.533: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5929/pods","resourceVersion":"10233793"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:02:39.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5929" for this suite. 
• [SLOW TEST:36.428 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":189,"skipped":2996,"failed":0}
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:02:39.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:02:43.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2791" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":190,"skipped":2996,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:02:43.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 22 22:02:43.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1176' Apr 22 22:02:43.962: INFO: stderr: "" Apr 22 22:02:43.962: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 22 22:02:49.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1176 -o json' Apr 22 22:02:49.108: INFO: stderr: "" Apr 22 22:02:49.108: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n 
\"metadata\": {\n \"creationTimestamp\": \"2020-04-22T22:02:43Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1176\",\n \"resourceVersion\": \"10233962\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1176/pods/e2e-test-httpd-pod\",\n \"uid\": \"53bc350a-165f-466d-aec0-7b6389ce7c19\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-gr7ll\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-gr7ll\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-gr7ll\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-22T22:02:43Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-22T22:02:47Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": 
\"2020-04-22T22:02:47Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-22T22:02:43Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d4dba5de91c4088455ff82557118196c83bc4fb4bc618a832bf372e401eba7c1\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-22T22:02:46Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.229\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.229\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-22T22:02:43Z\"\n }\n}\n" STEP: replace the image in the pod Apr 22 22:02:49.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1176' Apr 22 22:02:49.441: INFO: stderr: "" Apr 22 22:02:49.441: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Apr 22 22:02:49.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1176' Apr 22 22:02:59.507: INFO: stderr: "" Apr 22 22:02:59.507: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:02:59.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1176" for this suite. 
• [SLOW TEST:15.722 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":191,"skipped":3002,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:02:59.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-8e999251-1fe3-4c58-9fde-67f7c810c121 STEP: Creating a pod to test consume secrets Apr 22 22:02:59.628: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-842b40f3-b8e3-45e8-b22c-cec6bd25ab3e" in namespace "projected-4783" to be "success or failure" Apr 22 22:02:59.635: INFO: Pod "pod-projected-secrets-842b40f3-b8e3-45e8-b22c-cec6bd25ab3e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.435356ms Apr 22 22:03:01.640: INFO: Pod "pod-projected-secrets-842b40f3-b8e3-45e8-b22c-cec6bd25ab3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011845414s Apr 22 22:03:03.644: INFO: Pod "pod-projected-secrets-842b40f3-b8e3-45e8-b22c-cec6bd25ab3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016395382s STEP: Saw pod success Apr 22 22:03:03.644: INFO: Pod "pod-projected-secrets-842b40f3-b8e3-45e8-b22c-cec6bd25ab3e" satisfied condition "success or failure" Apr 22 22:03:03.648: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-842b40f3-b8e3-45e8-b22c-cec6bd25ab3e container projected-secret-volume-test: STEP: delete the pod Apr 22 22:03:03.665: INFO: Waiting for pod pod-projected-secrets-842b40f3-b8e3-45e8-b22c-cec6bd25ab3e to disappear Apr 22 22:03:03.690: INFO: Pod pod-projected-secrets-842b40f3-b8e3-45e8-b22c-cec6bd25ab3e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:03:03.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4783" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3011,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:03:03.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:03:14.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4849" for this suite. • [SLOW TEST:11.193 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":193,"skipped":3021,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:03:14.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the 
expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:03:43.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6063" for this suite. • [SLOW TEST:28.562 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:03:43.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] 
Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7913 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-7913 Apr 22 22:03:43.621: INFO: Found 0 stateful pods, waiting for 1 Apr 22 22:03:53.626: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 22 22:03:53.648: INFO: Deleting all statefulset in ns statefulset-7913 Apr 22 22:03:53.654: INFO: Scaling statefulset ss to 0 Apr 22 22:04:03.700: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:04:03.703: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:04:03.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7913" for this suite. 
• [SLOW TEST:20.295 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":195,"skipped":3109,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:04:03.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8430
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-8430
STEP: creating replication controller externalsvc in namespace services-8430
I0422 22:04:03.888651 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8430, replica count: 2 I0422 22:04:06.939081 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:04:09.939343 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 22 22:04:10.758: INFO: Creating new exec pod Apr 22 22:04:14.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8430 execpodff47h -- /bin/sh -x -c nslookup clusterip-service' Apr 22 22:04:14.996: INFO: stderr: "I0422 22:04:14.925443 2366 log.go:172] (0xc0001051e0) (0xc000976000) Create stream\nI0422 22:04:14.925515 2366 log.go:172] (0xc0001051e0) (0xc000976000) Stream added, broadcasting: 1\nI0422 22:04:14.927215 2366 log.go:172] (0xc0001051e0) Reply frame received for 1\nI0422 22:04:14.927274 2366 log.go:172] (0xc0001051e0) (0xc0005b7ae0) Create stream\nI0422 22:04:14.927300 2366 log.go:172] (0xc0001051e0) (0xc0005b7ae0) Stream added, broadcasting: 3\nI0422 22:04:14.928201 2366 log.go:172] (0xc0001051e0) Reply frame received for 3\nI0422 22:04:14.928238 2366 log.go:172] (0xc0001051e0) (0xc00001a000) Create stream\nI0422 22:04:14.928251 2366 log.go:172] (0xc0001051e0) (0xc00001a000) Stream added, broadcasting: 5\nI0422 22:04:14.929056 2366 log.go:172] (0xc0001051e0) Reply frame received for 5\nI0422 22:04:14.981492 2366 log.go:172] (0xc0001051e0) Data frame received for 5\nI0422 22:04:14.981514 2366 log.go:172] (0xc00001a000) (5) Data frame handling\nI0422 22:04:14.981526 2366 log.go:172] (0xc00001a000) (5) Data frame sent\n+ nslookup clusterip-service\nI0422 22:04:14.988077 2366 log.go:172] (0xc0001051e0) Data frame received for 3\nI0422 22:04:14.988092 2366 log.go:172] (0xc0005b7ae0) 
(3) Data frame handling\nI0422 22:04:14.988104 2366 log.go:172] (0xc0005b7ae0) (3) Data frame sent\nI0422 22:04:14.988878 2366 log.go:172] (0xc0001051e0) Data frame received for 3\nI0422 22:04:14.988899 2366 log.go:172] (0xc0005b7ae0) (3) Data frame handling\nI0422 22:04:14.988922 2366 log.go:172] (0xc0005b7ae0) (3) Data frame sent\nI0422 22:04:14.989311 2366 log.go:172] (0xc0001051e0) Data frame received for 3\nI0422 22:04:14.989338 2366 log.go:172] (0xc0005b7ae0) (3) Data frame handling\nI0422 22:04:14.989585 2366 log.go:172] (0xc0001051e0) Data frame received for 5\nI0422 22:04:14.989603 2366 log.go:172] (0xc00001a000) (5) Data frame handling\nI0422 22:04:14.991268 2366 log.go:172] (0xc0001051e0) Data frame received for 1\nI0422 22:04:14.991294 2366 log.go:172] (0xc000976000) (1) Data frame handling\nI0422 22:04:14.991329 2366 log.go:172] (0xc000976000) (1) Data frame sent\nI0422 22:04:14.991412 2366 log.go:172] (0xc0001051e0) (0xc000976000) Stream removed, broadcasting: 1\nI0422 22:04:14.991455 2366 log.go:172] (0xc0001051e0) Go away received\nI0422 22:04:14.991964 2366 log.go:172] (0xc0001051e0) (0xc000976000) Stream removed, broadcasting: 1\nI0422 22:04:14.991984 2366 log.go:172] (0xc0001051e0) (0xc0005b7ae0) Stream removed, broadcasting: 3\nI0422 22:04:14.991993 2366 log.go:172] (0xc0001051e0) (0xc00001a000) Stream removed, broadcasting: 5\n" Apr 22 22:04:14.996: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8430.svc.cluster.local\tcanonical name = externalsvc.services-8430.svc.cluster.local.\nName:\texternalsvc.services-8430.svc.cluster.local\nAddress: 10.96.84.146\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8430, will wait for the garbage collector to delete the pods Apr 22 22:04:15.055: INFO: Deleting ReplicationController externalsvc took: 5.984847ms Apr 22 22:04:15.156: INFO: Terminating ReplicationController externalsvc pods took: 100.256298ms Apr 22 22:04:29.333: INFO: 
Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:04:29.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8430" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.605 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":196,"skipped":3159,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:04:29.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:04:33.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4510" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:04:33.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1720.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1720.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1720.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1720.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1720.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1720.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1720.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1720.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1720.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1720.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 209.143.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.143.209_udp@PTR;check="$$(dig +tcp +noall +answer +search 209.143.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.143.209_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1720.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1720.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1720.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1720.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1720.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1720.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1720.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1720.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1720.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1720.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1720.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 209.143.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.143.209_udp@PTR;check="$$(dig +tcp +noall +answer +search 209.143.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.143.209_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 22:04:39.658: INFO: Unable to read wheezy_udp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:39.661: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:39.664: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:39.667: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:39.689: INFO: Unable to read jessie_udp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:39.691: INFO: Unable to read jessie_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:39.694: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod 
dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:39.697: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:39.714: INFO: Lookups using dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce failed for: [wheezy_udp@dns-test-service.dns-1720.svc.cluster.local wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_udp@dns-test-service.dns-1720.svc.cluster.local jessie_tcp@dns-test-service.dns-1720.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local] Apr 22 22:04:44.720: INFO: Unable to read wheezy_udp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:44.724: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:44.727: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:44.731: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod 
dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:44.753: INFO: Unable to read jessie_udp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:44.756: INFO: Unable to read jessie_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:44.758: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:44.760: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:44.778: INFO: Lookups using dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce failed for: [wheezy_udp@dns-test-service.dns-1720.svc.cluster.local wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_udp@dns-test-service.dns-1720.svc.cluster.local jessie_tcp@dns-test-service.dns-1720.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local] Apr 22 22:04:49.719: INFO: Unable to read wheezy_udp@dns-test-service.dns-1720.svc.cluster.local from pod 
dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:49.723: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:49.726: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:49.729: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:49.753: INFO: Unable to read jessie_udp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:49.757: INFO: Unable to read jessie_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:49.760: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:49.763: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not 
find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:49.782: INFO: Lookups using dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce failed for: [wheezy_udp@dns-test-service.dns-1720.svc.cluster.local wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_udp@dns-test-service.dns-1720.svc.cluster.local jessie_tcp@dns-test-service.dns-1720.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local] Apr 22 22:04:54.719: INFO: Unable to read wheezy_udp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:54.723: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:54.726: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:54.729: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:54.750: INFO: Unable to read jessie_udp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods 
dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:54.752: INFO: Unable to read jessie_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:54.756: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:54.759: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:54.778: INFO: Lookups using dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce failed for: [wheezy_udp@dns-test-service.dns-1720.svc.cluster.local wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_udp@dns-test-service.dns-1720.svc.cluster.local jessie_tcp@dns-test-service.dns-1720.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local] Apr 22 22:04:59.719: INFO: Unable to read wheezy_udp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:59.722: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods 
dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:59.724: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:59.727: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:59.747: INFO: Unable to read jessie_udp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:59.750: INFO: Unable to read jessie_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:59.753: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:59.756: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:04:59.772: INFO: Lookups using dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce failed for: [wheezy_udp@dns-test-service.dns-1720.svc.cluster.local wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_udp@dns-test-service.dns-1720.svc.cluster.local jessie_tcp@dns-test-service.dns-1720.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local] Apr 22 22:05:04.720: INFO: Unable to read wheezy_udp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:05:04.724: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:05:04.728: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:05:04.731: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:05:04.752: INFO: Unable to read jessie_udp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:05:04.755: INFO: Unable to read jessie_tcp@dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:05:04.758: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:05:04.761: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local from pod dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce: the server could not find the requested resource (get pods dns-test-38075f2e-bbaa-472b-9163-6786730b97ce) Apr 22 22:05:04.777: INFO: Lookups using dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce failed for: [wheezy_udp@dns-test-service.dns-1720.svc.cluster.local wheezy_tcp@dns-test-service.dns-1720.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_udp@dns-test-service.dns-1720.svc.cluster.local jessie_tcp@dns-test-service.dns-1720.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1720.svc.cluster.local] Apr 22 22:05:09.775: INFO: DNS probes using dns-1720/dns-test-38075f2e-bbaa-472b-9163-6786730b97ce succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:05:10.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1720" for this suite. 
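[editor's note] The probe pods above build their query names from raw IPs. As a minimal sketch of that name mangling (function names are illustrative, not from the test framework): the PTR query `209.143.103.10.in-addr.arpa.` is the service IP `10.103.143.209` with its octets reversed, and the pod A record is the pod IP with dots replaced by dashes under `<namespace>.pod.cluster.local`, exactly as the `awk` pipeline in the logged commands does.

```shell
# Sketch of the DNS name construction used by the probe commands above.
# ip_to_ptr: 10.103.143.209 -> 209.143.103.10.in-addr.arpa.
ip_to_ptr() {
  echo "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
}

# ip_to_pod_a: pod IP + namespace -> dashed pod A-record name,
# mirroring the `hostname -i | awk -F. ...` step in the log.
ip_to_pod_a() {
  echo "$1" | awk -F. -v ns="$2" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

ip_to_ptr 10.103.143.209        # prints 209.143.103.10.in-addr.arpa.
ip_to_pod_a 10.244.1.7 dns-1720 # prints 10-244-1-7.dns-1720.pod.cluster.local
```

The early "Unable to read" records are expected while CoreDNS propagates the new service; the loop retries every second until each `dig` check writes its `OK` file, which is why the probes ultimately succeed at 22:05:09.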
• [SLOW TEST:36.884 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":198,"skipped":3204,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:05:10.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 22:05:10.466: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 22 22:05:12.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-875 create -f -' Apr 22 22:05:15.127: INFO: stderr: "" Apr 22 22:05:15.127: INFO: stdout: "e2e-test-crd-publish-openapi-9914-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 22 22:05:15.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-875 delete 
e2e-test-crd-publish-openapi-9914-crds test-cr' Apr 22 22:05:15.229: INFO: stderr: "" Apr 22 22:05:15.229: INFO: stdout: "e2e-test-crd-publish-openapi-9914-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 22 22:05:15.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-875 apply -f -' Apr 22 22:05:16.342: INFO: stderr: "" Apr 22 22:05:16.342: INFO: stdout: "e2e-test-crd-publish-openapi-9914-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 22 22:05:16.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-875 delete e2e-test-crd-publish-openapi-9914-crds test-cr' Apr 22 22:05:16.439: INFO: stderr: "" Apr 22 22:05:16.439: INFO: stdout: "e2e-test-crd-publish-openapi-9914-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 22 22:05:16.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9914-crds' Apr 22 22:05:16.677: INFO: stderr: "" Apr 22 22:05:16.677: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9914-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:05:19.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-875" for this suite. • [SLOW TEST:9.184 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":199,"skipped":3208,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:05:19.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 22:05:19.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68d364f8-c389-43ab-b032-de98e2b7be01" in namespace "projected-4813" to be "success or failure" Apr 22 22:05:19.638: INFO: Pod "downwardapi-volume-68d364f8-c389-43ab-b032-de98e2b7be01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154222ms Apr 22 22:05:21.645: INFO: Pod "downwardapi-volume-68d364f8-c389-43ab-b032-de98e2b7be01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011657154s Apr 22 22:05:23.650: INFO: Pod "downwardapi-volume-68d364f8-c389-43ab-b032-de98e2b7be01": Phase="Running", Reason="", readiness=true. Elapsed: 4.016321192s Apr 22 22:05:25.654: INFO: Pod "downwardapi-volume-68d364f8-c389-43ab-b032-de98e2b7be01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020195661s STEP: Saw pod success Apr 22 22:05:25.654: INFO: Pod "downwardapi-volume-68d364f8-c389-43ab-b032-de98e2b7be01" satisfied condition "success or failure" Apr 22 22:05:25.657: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-68d364f8-c389-43ab-b032-de98e2b7be01 container client-container: STEP: delete the pod Apr 22 22:05:25.706: INFO: Waiting for pod downwardapi-volume-68d364f8-c389-43ab-b032-de98e2b7be01 to disappear Apr 22 22:05:25.716: INFO: Pod downwardapi-volume-68d364f8-c389-43ab-b032-de98e2b7be01 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:05:25.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4813" for this suite. 
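[editor's note] The DefaultMode test above creates a pod whose projected downward API volume sets a file mode and then reads the file back from the container logs. A minimal manifest sketch of that shape (names and the 0400 mode are illustrative; the framework generates its own pod spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the file's octal mode so the test can inspect it in the logs.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                # mode applied to projected files
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

The pod runs to completion ("success or failure" means phase Succeeded here), the framework fetches the container logs to verify the mode, then deletes the pod.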
• [SLOW TEST:6.152 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:05:25.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:05:26.378: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:05:28.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189926, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189926, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189926, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723189926, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:05:31.421: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:05:31.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "webhook-7394" for this suite. STEP: Destroying namespace "webhook-7394-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.796 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":201,"skipped":3240,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:05:31.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Apr 22 22:05:31.581: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl 
client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:05:31.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9540" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":202,"skipped":3257,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:05:31.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-8747 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8747 to expose endpoints map[] Apr 22 22:05:31.793: INFO: Get endpoints failed (10.388149ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 22 22:05:32.797: INFO: successfully validated that service multi-endpoint-test in namespace services-8747 exposes endpoints map[] (1.014581368s elapsed) STEP: Creating pod pod1 in namespace services-8747 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8747 to expose endpoints map[pod1:[100]] Apr 22 22:05:35.879: INFO: successfully validated that service 
multi-endpoint-test in namespace services-8747 exposes endpoints map[pod1:[100]] (3.075967524s elapsed) STEP: Creating pod pod2 in namespace services-8747 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8747 to expose endpoints map[pod1:[100] pod2:[101]] Apr 22 22:05:40.011: INFO: successfully validated that service multi-endpoint-test in namespace services-8747 exposes endpoints map[pod1:[100] pod2:[101]] (4.103088026s elapsed) STEP: Deleting pod pod1 in namespace services-8747 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8747 to expose endpoints map[pod2:[101]] Apr 22 22:05:41.052: INFO: successfully validated that service multi-endpoint-test in namespace services-8747 exposes endpoints map[pod2:[101]] (1.037465038s elapsed) STEP: Deleting pod pod2 in namespace services-8747 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8747 to expose endpoints map[] Apr 22 22:05:42.172: INFO: successfully validated that service multi-endpoint-test in namespace services-8747 exposes endpoints map[] (1.115613824s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:05:42.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8747" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.536 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":203,"skipped":3274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:05:42.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-3af6fc33-3bfa-45de-9f61-31b0de7a51fe STEP: Creating a pod to test consume secrets Apr 22 22:05:42.273: INFO: Waiting up to 5m0s for pod "pod-secrets-4640f370-3829-434c-aa31-80ff22fc6f34" in namespace "secrets-8935" to be "success or failure" Apr 22 22:05:42.304: INFO: Pod "pod-secrets-4640f370-3829-434c-aa31-80ff22fc6f34": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.051274ms Apr 22 22:05:44.322: INFO: Pod "pod-secrets-4640f370-3829-434c-aa31-80ff22fc6f34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049476366s Apr 22 22:05:46.326: INFO: Pod "pod-secrets-4640f370-3829-434c-aa31-80ff22fc6f34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053112377s STEP: Saw pod success Apr 22 22:05:46.326: INFO: Pod "pod-secrets-4640f370-3829-434c-aa31-80ff22fc6f34" satisfied condition "success or failure" Apr 22 22:05:46.346: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-4640f370-3829-434c-aa31-80ff22fc6f34 container secret-volume-test: STEP: delete the pod Apr 22 22:05:46.366: INFO: Waiting for pod pod-secrets-4640f370-3829-434c-aa31-80ff22fc6f34 to disappear Apr 22 22:05:46.370: INFO: Pod pod-secrets-4640f370-3829-434c-aa31-80ff22fc6f34 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:05:46.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8935" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3298,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:05:46.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 22 22:05:46.479: INFO: Waiting up to 5m0s for pod "pod-47c608c8-f57b-417b-89fe-a5998d3d68ea" in namespace "emptydir-103" to be "success or failure" Apr 22 22:05:46.483: INFO: Pod "pod-47c608c8-f57b-417b-89fe-a5998d3d68ea": Phase="Pending", Reason="", readiness=false. Elapsed: 3.472241ms Apr 22 22:05:48.507: INFO: Pod "pod-47c608c8-f57b-417b-89fe-a5998d3d68ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028040576s Apr 22 22:05:50.512: INFO: Pod "pod-47c608c8-f57b-417b-89fe-a5998d3d68ea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032284733s STEP: Saw pod success Apr 22 22:05:50.512: INFO: Pod "pod-47c608c8-f57b-417b-89fe-a5998d3d68ea" satisfied condition "success or failure" Apr 22 22:05:50.515: INFO: Trying to get logs from node jerma-worker2 pod pod-47c608c8-f57b-417b-89fe-a5998d3d68ea container test-container: STEP: delete the pod Apr 22 22:05:50.532: INFO: Waiting for pod pod-47c608c8-f57b-417b-89fe-a5998d3d68ea to disappear Apr 22 22:05:50.537: INFO: Pod pod-47c608c8-f57b-417b-89fe-a5998d3d68ea no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:05:50.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-103" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3308,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:05:50.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 22 
22:05:55.654: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:05:55.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6263" for this suite. • [SLOW TEST:5.188 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":206,"skipped":3313,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:05:55.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 22:05:55.841: INFO: Creating deployment "webserver-deployment" Apr 22 22:05:55.862: INFO: Waiting for observed generation 1 Apr 22 22:05:57.921: INFO: Waiting for all required pods to 
come up Apr 22 22:05:57.926: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 22 22:06:07.987: INFO: Waiting for deployment "webserver-deployment" to complete Apr 22 22:06:07.992: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 22 22:06:07.999: INFO: Updating deployment webserver-deployment Apr 22 22:06:07.999: INFO: Waiting for observed generation 2 Apr 22 22:06:10.009: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 22 22:06:10.012: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 22 22:06:10.014: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 22 22:06:10.020: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 22 22:06:10.020: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 22 22:06:10.022: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 22 22:06:10.043: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 22 22:06:10.043: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 22 22:06:10.050: INFO: Updating deployment webserver-deployment Apr 22 22:06:10.050: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 22 22:06:10.074: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 22 22:06:10.089: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 22 22:06:10.211: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7229 
/apis/apps/v1/namespaces/deployment-7229/deployments/webserver-deployment 0a196a12-2b22-497d-b3bd-a0e8a7e18ad5 10235385 3 2020-04-22 22:05:55 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040ace08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-22 22:06:08 +0000 UTC,LastTransitionTime:2020-04-22 22:05:55 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-22 22:06:10 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 22 22:06:10.413: INFO: New ReplicaSet 
"webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-7229 /apis/apps/v1/namespaces/deployment-7229/replicasets/webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 10235418 3 2020-04-22 22:06:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 0a196a12-2b22-497d-b3bd-a0e8a7e18ad5 0xc0040ad2d7 0xc0040ad2d8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040ad348 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:06:10.413: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 22 22:06:10.413: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-7229 /apis/apps/v1/namespaces/deployment-7229/replicasets/webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 10235416 3 2020-04-22 22:05:55 
+0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 0a196a12-2b22-497d-b3bd-a0e8a7e18ad5 0xc0040ad217 0xc0040ad218}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040ad278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:06:10.473: INFO: Pod "webserver-deployment-595b5b9587-52nfn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-52nfn webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-52nfn b080f6b9-7c11-4cbc-97f1-587271f0d0fb 10235282 0 2020-04-22 22:05:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004a69747 0xc004a69748}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.244,StartTime:2020-04-22 22:05:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 22:06:05 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d587f56e6f4d4ebe77f0ef6762dcb197c00fb01c29742e88daf5141f29ae963a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.473: INFO: Pod "webserver-deployment-595b5b9587-77sbg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-77sbg webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-77sbg d15e459c-235c-4128-99fd-9801300cb54e 10235391 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004a698d7 0xc004a698d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.473: INFO: Pod "webserver-deployment-595b5b9587-88sxm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-88sxm webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-88sxm 03654508-ee6b-4424-a38f-707342f40721 10235413 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004a69a07 0xc004a69a08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.474: INFO: Pod "webserver-deployment-595b5b9587-bz9db" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bz9db webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-bz9db 4d3993be-b228-4c56-933c-b12ce0e177d4 10235429 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004a69b27 0xc004a69b28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-22 22:06:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.474: INFO: Pod "webserver-deployment-595b5b9587-c7wfm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c7wfm webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-c7wfm 09e1d0f3-f783-4d7f-b707-9553f14c7985 10235254 0 2020-04-22 22:05:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004a69c87 0xc004a69c88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.128,StartTime:2020-04-22 22:05:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 22:06:01 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4ac7afb34773d30c6ea3b816dd5d0f69f04558db3561e154867678405b2ca624,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.128,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.474: INFO: Pod "webserver-deployment-595b5b9587-c8lcj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c8lcj webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-c8lcj 9044d8a4-d375-4706-8214-66035c9b72b0 10235396 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004a69e07 0xc004a69e08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.474: INFO: Pod "webserver-deployment-595b5b9587-cgmxm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cgmxm webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-cgmxm 91453768-5886-4971-8eee-92aee8adf0b4 10235256 0 2020-04-22 22:05:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004a69f27 0xc004a69f28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.241,StartTime:2020-04-22 22:05:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 22:06:02 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://80646218a38f9c978c208bb8f369455eb1d2576fede5cd7333d7e6986554e88a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.474: INFO: Pod "webserver-deployment-595b5b9587-cq27p" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cq27p webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-cq27p c42ff461-0518-4607-82a6-ab62c144a942 10235402 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc0049360a7 0xc0049360a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.474: INFO: Pod "webserver-deployment-595b5b9587-cqm68" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cqm68 webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-cqm68 29ebf22b-75c1-42b8-882f-9aacdc75b796 10235250 0 2020-04-22 22:05:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc0049361c7 0xc0049361c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.240,StartTime:2020-04-22 22:05:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 22:06:02 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4ff1051fa15850110ab0c0f1353c38e11edfd50f59d2a36b843f97b2cb63777a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.240,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.475: INFO: Pod "webserver-deployment-595b5b9587-d62pj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d62pj webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-d62pj da5e3727-d7de-4ad1-a1bf-5066b19293a7 10235414 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004936357 0xc004936358}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.475: INFO: Pod "webserver-deployment-595b5b9587-dgx5v" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dgx5v webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-dgx5v 4bc6e1d5-27f7-498a-b0c7-2286a2851e83 10235238 0 2020-04-22 22:05:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004936497 0xc004936498}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.129,StartTime:2020-04-22 22:05:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 22:06:01 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ccb72e7cd82794335d711992a983588b4845ecb531bc9c21e22e15b0dbf7126b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.475: INFO: Pod "webserver-deployment-595b5b9587-f5vkd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f5vkd webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-f5vkd 0ff4a2d6-e11a-42a7-83cb-6957ae03833e 10235289 0 2020-04-22 22:05:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004936617 0xc004936618}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.131,StartTime:2020-04-22 22:05:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 22:06:05 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ea82f6c1aec297a7ac01b343ce8ad3b316c821c43f904a8dc4144b2a372ba105,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.475: INFO: Pod "webserver-deployment-595b5b9587-g8nbr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g8nbr webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-g8nbr 6be64013-8d24-4887-8188-305241570dea 10235409 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004936797 0xc004936798}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.475: INFO: Pod "webserver-deployment-595b5b9587-gqqtx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gqqtx webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-gqqtx 5479a076-8742-44a9-aa7f-9860ffcbe1ba 10235381 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc0049368b7 0xc0049368b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.475: INFO: Pod "webserver-deployment-595b5b9587-jfm27" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jfm27 webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-jfm27 154405a7-02e1-4138-a995-e08764e15a04 10235410 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc0049369d7 0xc0049369d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.476: INFO: Pod "webserver-deployment-595b5b9587-m6kjh" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-m6kjh webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-m6kjh c043c0b7-ef8a-4140-b640-fee68b2547b9 10235283 0 2020-04-22 22:05:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004936af7 0xc004936af8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.132,StartTime:2020-04-22 22:05:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 22:06:05 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5e23a1a66baf4bade83b41534d241be919d5409a255186e1fc084a744d25f66d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.132,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.476: INFO: Pod "webserver-deployment-595b5b9587-pqlpt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pqlpt webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-pqlpt 5c4952bd-a69d-4267-8b67-daf05e4a3f2b 10235291 0 2020-04-22 22:05:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004936c77 0xc004936c78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:05:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.130,StartTime:2020-04-22 22:05:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 22:06:04 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://263e585dcaca5f8f7f9ed448424f530b4b8e951124abc8706a33b94f5bc28751,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.130,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.476: INFO: Pod "webserver-deployment-595b5b9587-rzbk9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rzbk9 webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-rzbk9 cc90e71e-28cd-4ada-b90e-6473f630ff33 10235401 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004936df7 0xc004936df8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.476: INFO: Pod "webserver-deployment-595b5b9587-v47gx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v47gx webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-v47gx 8abe7798-1755-4c94-b274-576e4e184af8 10235406 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004936f27 0xc004936f28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.476: INFO: Pod "webserver-deployment-595b5b9587-vf48m" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vf48m webserver-deployment-595b5b9587- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-595b5b9587-vf48m 4a78db8d-c098-45a6-adcd-82ec87bc154c 10235415 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 293189c8-5ba2-438e-9906-ced6b08e4570 0xc004937057 0xc004937058}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-22 22:06:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.476: INFO: Pod "webserver-deployment-c7997dcc8-2ng4h" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2ng4h webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-2ng4h ea0cbc11-794e-40bc-a702-801aa78bc138 10235404 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc0049371c7 0xc0049371c8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.477: INFO: Pod "webserver-deployment-c7997dcc8-6j26s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6j26s webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-6j26s be6ee835-0cff-4159-a5ff-02ebb1bd87a5 10235387 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc004937307 0xc004937308}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.477: INFO: Pod "webserver-deployment-c7997dcc8-b7jng" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b7jng webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-b7jng f4a6c9bf-c898-4d06-8081-b1c582106767 10235341 0 2020-04-22 22:06:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc004937437 0xc004937438}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-22 22:06:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.477: INFO: Pod "webserver-deployment-c7997dcc8-bnt5t" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bnt5t webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-bnt5t c0a7a725-60b6-4569-a795-9fac24703ffe 10235419 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc0049375e7 0xc0049375e8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.477: INFO: Pod "webserver-deployment-c7997dcc8-bpkvq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bpkvq webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-bpkvq e915b4f5-b7b5-45a3-b6fa-810ab6e328f0 10235331 0 2020-04-22 22:06:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc004937717 0xc004937718}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-22 22:06:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.477: INFO: Pod "webserver-deployment-c7997dcc8-dshrl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dshrl webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-dshrl 46d8d0a6-d8f4-481e-b3e7-6df4515d5b7e 10235352 0 2020-04-22 22:06:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc004937897 0xc004937898}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-22 22:06:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.477: INFO: Pod "webserver-deployment-c7997dcc8-gwvmh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gwvmh webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-gwvmh d9baab45-3449-4b7f-a6ce-ecdd672074f0 10235394 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc004937a17 0xc004937a18}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.478: INFO: Pod "webserver-deployment-c7997dcc8-m4v2n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m4v2n webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-m4v2n a265d68d-afd0-418c-94b5-c844697a9938 10235354 0 2020-04-22 22:06:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc004937b47 0xc004937b48}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-22 22:06:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.478: INFO: Pod "webserver-deployment-c7997dcc8-mntwm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mntwm webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-mntwm 30068020-265c-4470-849f-1bc41168aeb9 10235327 0 2020-04-22 22:06:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc004937cc7 0xc004937cc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-22 22:06:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.478: INFO: Pod "webserver-deployment-c7997dcc8-ppvcq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ppvcq webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-ppvcq 073748c0-f7af-4206-b3bf-8b90b08df5a1 10235405 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc004937e47 0xc004937e48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.478: INFO: Pod "webserver-deployment-c7997dcc8-ps2mv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ps2mv webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-ps2mv f2ff17d2-0d46-4c0c-ae87-79c317824ae5 10235411 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc004937f77 0xc004937f78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.478: INFO: Pod "webserver-deployment-c7997dcc8-tzp2c" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tzp2c webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-tzp2c 231802e1-eb95-4843-895e-3d04de87f715 10235412 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc002f040a7 0xc002f040a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:06:10.479: INFO: Pod "webserver-deployment-c7997dcc8-vst9t" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vst9t webserver-deployment-c7997dcc8- deployment-7229 /api/v1/namespaces/deployment-7229/pods/webserver-deployment-c7997dcc8-vst9t 0d830b75-7423-4f99-8cb7-342bc3b8e1f6 10235426 0 2020-04-22 22:06:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a774aa8-400d-4102-a8bd-42782a4df4a2 0xc002f041d7 0xc002f041d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbvl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbvl5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbvl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 22:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-22 22:06:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:06:10.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7229" for this suite. • [SLOW TEST:14.849 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":207,"skipped":3330,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:06:10.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-bcd2301c-1f10-48a5-9d08-832b080e6b37 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:06:27.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4644" for this suite. • [SLOW TEST:17.151 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3333,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:06:27.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 22 22:06:39.161: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:06:39.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8869" for this suite. • [SLOW TEST:12.080 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:06:39.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:06:41.767: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:06:43.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190001, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190001, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190001, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190001, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:06:45.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190001, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190001, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190001, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190001, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:06:48.928: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:06:49.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3262" for this suite. STEP: Destroying namespace "webhook-3262-markers" for this suite. 
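For context, the mutating webhook configuration exercised in this test has roughly the shape below. This is an illustrative sketch only: the object name, webhook name, service path, and CA bundle are placeholders, not the exact object the e2e framework builds (the log confirms only the service name `e2e-test-webhook` and the namespaces `webhook-3262` / `webhook-3262-markers`). The test first updates `rules` so `operations` no longer includes CREATE and verifies a new ConfigMap is not mutated, then patches CREATE back in and verifies mutation resumes.

```yaml
# Sketch of a MutatingWebhookConfiguration like the one patched in this test.
# Names, path, and caBundle are placeholders, not the framework's exact object.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook        # placeholder name
webhooks:
  - name: mutate-configmaps.example.com  # placeholder name
    clientConfig:
      service:
        name: e2e-test-webhook           # service deployed in the log above
        namespace: webhook-3262
        path: /mutating-configmaps       # assumed path
      caBundle: <base64-encoded-CA>      # placeholder
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]           # the test removes CREATE here, then
        resources: ["configmaps"]        # patches it back to re-enable mutation
    sideEffects: None
    admissionReviewVersions: ["v1"]
```

The toggle itself can be done with `kubectl patch mutatingwebhookconfiguration <name> --type=json` against the `/webhooks/0/rules/0/operations` path, which is the "Patching a mutating webhook configuration's rules" step visible in the STEP lines above.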
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.335 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":210,"skipped":3374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:06:49.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 22:06:49.196: INFO: Creating ReplicaSet my-hostname-basic-45250879-de3c-4375-b519-b63c70c6b02f Apr 22 22:06:49.211: INFO: Pod name my-hostname-basic-45250879-de3c-4375-b519-b63c70c6b02f: Found 0 pods out of 1 Apr 22 22:06:54.219: INFO: Pod name my-hostname-basic-45250879-de3c-4375-b519-b63c70c6b02f: Found 1 pods out of 1 Apr 22 22:06:54.219: INFO: Ensuring a pod for ReplicaSet 
"my-hostname-basic-45250879-de3c-4375-b519-b63c70c6b02f" is running Apr 22 22:06:54.221: INFO: Pod "my-hostname-basic-45250879-de3c-4375-b519-b63c70c6b02f-2dhpt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-22 22:06:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-22 22:06:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-22 22:06:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-22 22:06:49 +0000 UTC Reason: Message:}]) Apr 22 22:06:54.221: INFO: Trying to dial the pod Apr 22 22:06:59.233: INFO: Controller my-hostname-basic-45250879-de3c-4375-b519-b63c70c6b02f: Got expected result from replica 1 [my-hostname-basic-45250879-de3c-4375-b519-b63c70c6b02f-2dhpt]: "my-hostname-basic-45250879-de3c-4375-b519-b63c70c6b02f-2dhpt", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:06:59.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6873" for this suite. 
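The ReplicaSet created above serves each pod's hostname over HTTP, which is what the "Got expected result from replica 1" check dials and compares. A hedged sketch of its shape follows; the image and port are assumptions based on the typical serve-hostname e2e image, not taken verbatim from this log (only the `my-hostname-basic-<uid>` naming pattern is).

```yaml
# Illustrative sketch of the test's ReplicaSet; image, args, and port are
# assumed (typical e2e serve-hostname setup), not read from this log.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic        # the test appends a generated UID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
        - name: my-hostname-basic
          image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8  # assumed
          args: ["serve-hostname"]                              # assumed
          ports:
            - containerPort: 9376
```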
• [SLOW TEST:10.075 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":211,"skipped":3448,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:06:59.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 22 22:06:59.321: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:06:59.330: INFO: Number of nodes with available pods: 0 Apr 22 22:06:59.330: INFO: Node jerma-worker is running more than one daemon pod Apr 22 22:07:00.336: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:07:00.340: INFO: Number of nodes with available pods: 0 Apr 22 22:07:00.340: INFO: Node jerma-worker is running more than one daemon pod Apr 22 22:07:01.338: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:07:01.341: INFO: Number of nodes with available pods: 0 Apr 22 22:07:01.341: INFO: Node jerma-worker is running more than one daemon pod Apr 22 22:07:02.336: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:07:02.340: INFO: Number of nodes with available pods: 0 Apr 22 22:07:02.340: INFO: Node jerma-worker is running more than one daemon pod Apr 22 22:07:03.336: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:07:03.340: INFO: Number of nodes with available pods: 1 Apr 22 22:07:03.340: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:07:04.356: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:07:04.359: INFO: Number of nodes with available pods: 2 Apr 22 22:07:04.359: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 22 22:07:04.427: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:07:04.442: INFO: Number of nodes with available pods: 1 Apr 22 22:07:04.442: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:07:05.447: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:07:05.450: INFO: Number of nodes with available pods: 1 Apr 22 22:07:05.450: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:07:06.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:07:06.626: INFO: Number of nodes with available pods: 1 Apr 22 22:07:06.626: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:07:07.446: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:07:07.449: INFO: Number of nodes with available pods: 1 Apr 22 22:07:07.449: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:07:08.467: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:07:08.470: INFO: Number of nodes with available pods: 2 Apr 22 22:07:08.470: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1428, will wait for the garbage collector to delete the pods Apr 22 22:07:08.534: INFO: Deleting DaemonSet.extensions daemon-set took: 6.583797ms Apr 22 22:07:08.634: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.224824ms Apr 22 22:07:19.538: INFO: Number of nodes with available pods: 0 Apr 22 22:07:19.538: INFO: Number of running nodes: 0, number of available pods: 0 Apr 22 22:07:19.540: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1428/daemonsets","resourceVersion":"10236150"},"items":null} Apr 22 22:07:19.543: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1428/pods","resourceVersion":"10236150"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:07:19.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1428" for this suite. 
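The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines above are expected: the test's DaemonSet spec carries no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint, so the control-plane node is excluded and only the two workers are counted. A minimal sketch of such a DaemonSet (the image is an assumption; the name and namespace match the log):

```yaml
# Sketch of the test DaemonSet; the container image is assumed, not from the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-1428
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
        - name: app
          image: gcr.io/kubernetes-e2e-test-images/httpd:2.4.38-alpine  # assumed
      # No toleration for node-role.kubernetes.io/master:NoSchedule, which is
      # why the log skips jerma-control-plane when counting available pods.
```

Adding a toleration for that taint under `spec.template.spec.tolerations` would make the DaemonSet schedule onto the control-plane node as well.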
• [SLOW TEST:20.320 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":212,"skipped":3452,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:07:19.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9031 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9031 I0422 22:07:19.824661 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9031, replica count: 2 I0422 22:07:22.875044 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 
22:07:25.875252 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 22:07:25.875: INFO: Creating new exec pod Apr 22 22:07:30.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9031 execpodclkn8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 22 22:07:31.136: INFO: stderr: "I0422 22:07:31.045018 2515 log.go:172] (0xc0000f4a50) (0xc000b34000) Create stream\nI0422 22:07:31.045091 2515 log.go:172] (0xc0000f4a50) (0xc000b34000) Stream added, broadcasting: 1\nI0422 22:07:31.048061 2515 log.go:172] (0xc0000f4a50) Reply frame received for 1\nI0422 22:07:31.048092 2515 log.go:172] (0xc0000f4a50) (0xc0007b2000) Create stream\nI0422 22:07:31.048101 2515 log.go:172] (0xc0000f4a50) (0xc0007b2000) Stream added, broadcasting: 3\nI0422 22:07:31.049283 2515 log.go:172] (0xc0000f4a50) Reply frame received for 3\nI0422 22:07:31.049316 2515 log.go:172] (0xc0000f4a50) (0xc000b340a0) Create stream\nI0422 22:07:31.049327 2515 log.go:172] (0xc0000f4a50) (0xc000b340a0) Stream added, broadcasting: 5\nI0422 22:07:31.050358 2515 log.go:172] (0xc0000f4a50) Reply frame received for 5\nI0422 22:07:31.128458 2515 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0422 22:07:31.128514 2515 log.go:172] (0xc000b340a0) (5) Data frame handling\nI0422 22:07:31.128544 2515 log.go:172] (0xc000b340a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0422 22:07:31.128924 2515 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0422 22:07:31.128968 2515 log.go:172] (0xc000b340a0) (5) Data frame handling\nI0422 22:07:31.128996 2515 log.go:172] (0xc000b340a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0422 22:07:31.129358 2515 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0422 22:07:31.129392 2515 log.go:172] (0xc0007b2000) (3) Data frame handling\nI0422 
22:07:31.129516 2515 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0422 22:07:31.129558 2515 log.go:172] (0xc000b340a0) (5) Data frame handling\nI0422 22:07:31.131494 2515 log.go:172] (0xc0000f4a50) Data frame received for 1\nI0422 22:07:31.131583 2515 log.go:172] (0xc000b34000) (1) Data frame handling\nI0422 22:07:31.131678 2515 log.go:172] (0xc000b34000) (1) Data frame sent\nI0422 22:07:31.131715 2515 log.go:172] (0xc0000f4a50) (0xc000b34000) Stream removed, broadcasting: 1\nI0422 22:07:31.131745 2515 log.go:172] (0xc0000f4a50) Go away received\nI0422 22:07:31.131999 2515 log.go:172] (0xc0000f4a50) (0xc000b34000) Stream removed, broadcasting: 1\nI0422 22:07:31.132011 2515 log.go:172] (0xc0000f4a50) (0xc0007b2000) Stream removed, broadcasting: 3\nI0422 22:07:31.132017 2515 log.go:172] (0xc0000f4a50) (0xc000b340a0) Stream removed, broadcasting: 5\n" Apr 22 22:07:31.136: INFO: stdout: "" Apr 22 22:07:31.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9031 execpodclkn8 -- /bin/sh -x -c nc -zv -t -w 2 10.108.195.227 80' Apr 22 22:07:31.341: INFO: stderr: "I0422 22:07:31.268342 2537 log.go:172] (0xc00099c9a0) (0xc000611a40) Create stream\nI0422 22:07:31.268412 2537 log.go:172] (0xc00099c9a0) (0xc000611a40) Stream added, broadcasting: 1\nI0422 22:07:31.271918 2537 log.go:172] (0xc00099c9a0) Reply frame received for 1\nI0422 22:07:31.271947 2537 log.go:172] (0xc00099c9a0) (0xc00095e000) Create stream\nI0422 22:07:31.271955 2537 log.go:172] (0xc00099c9a0) (0xc00095e000) Stream added, broadcasting: 3\nI0422 22:07:31.272621 2537 log.go:172] (0xc00099c9a0) Reply frame received for 3\nI0422 22:07:31.272653 2537 log.go:172] (0xc00099c9a0) (0xc00071e000) Create stream\nI0422 22:07:31.272666 2537 log.go:172] (0xc00099c9a0) (0xc00071e000) Stream added, broadcasting: 5\nI0422 22:07:31.273659 2537 log.go:172] (0xc00099c9a0) Reply frame received for 5\nI0422 22:07:31.335411 2537 log.go:172] (0xc00099c9a0) Data frame 
received for 3\nI0422 22:07:31.335444 2537 log.go:172] (0xc00095e000) (3) Data frame handling\nI0422 22:07:31.335472 2537 log.go:172] (0xc00099c9a0) Data frame received for 5\nI0422 22:07:31.335488 2537 log.go:172] (0xc00071e000) (5) Data frame handling\nI0422 22:07:31.335504 2537 log.go:172] (0xc00071e000) (5) Data frame sent\nI0422 22:07:31.335519 2537 log.go:172] (0xc00099c9a0) Data frame received for 5\nI0422 22:07:31.335530 2537 log.go:172] (0xc00071e000) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.195.227 80\nConnection to 10.108.195.227 80 port [tcp/http] succeeded!\nI0422 22:07:31.337097 2537 log.go:172] (0xc00099c9a0) Data frame received for 1\nI0422 22:07:31.337277 2537 log.go:172] (0xc000611a40) (1) Data frame handling\nI0422 22:07:31.337296 2537 log.go:172] (0xc000611a40) (1) Data frame sent\nI0422 22:07:31.337333 2537 log.go:172] (0xc00099c9a0) (0xc000611a40) Stream removed, broadcasting: 1\nI0422 22:07:31.337363 2537 log.go:172] (0xc00099c9a0) Go away received\nI0422 22:07:31.337684 2537 log.go:172] (0xc00099c9a0) (0xc000611a40) Stream removed, broadcasting: 1\nI0422 22:07:31.337702 2537 log.go:172] (0xc00099c9a0) (0xc00095e000) Stream removed, broadcasting: 3\nI0422 22:07:31.337709 2537 log.go:172] (0xc00099c9a0) (0xc00071e000) Stream removed, broadcasting: 5\n" Apr 22 22:07:31.341: INFO: stdout: "" Apr 22 22:07:31.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9031 execpodclkn8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30853' Apr 22 22:07:31.568: INFO: stderr: "I0422 22:07:31.479784 2557 log.go:172] (0xc000684d10) (0xc000625ae0) Create stream\nI0422 22:07:31.479845 2557 log.go:172] (0xc000684d10) (0xc000625ae0) Stream added, broadcasting: 1\nI0422 22:07:31.482966 2557 log.go:172] (0xc000684d10) Reply frame received for 1\nI0422 22:07:31.488620 2557 log.go:172] (0xc000684d10) (0xc0007ca000) Create stream\nI0422 22:07:31.488649 2557 log.go:172] (0xc000684d10) (0xc0007ca000) Stream added, 
broadcasting: 3\nI0422 22:07:31.489952 2557 log.go:172] (0xc000684d10) Reply frame received for 3\nI0422 22:07:31.489983 2557 log.go:172] (0xc000684d10) (0xc000625b80) Create stream\nI0422 22:07:31.490001 2557 log.go:172] (0xc000684d10) (0xc000625b80) Stream added, broadcasting: 5\nI0422 22:07:31.490854 2557 log.go:172] (0xc000684d10) Reply frame received for 5\nI0422 22:07:31.561358 2557 log.go:172] (0xc000684d10) Data frame received for 5\nI0422 22:07:31.561401 2557 log.go:172] (0xc000625b80) (5) Data frame handling\nI0422 22:07:31.561436 2557 log.go:172] (0xc000625b80) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30853\nConnection to 172.17.0.10 30853 port [tcp/30853] succeeded!\nI0422 22:07:31.561723 2557 log.go:172] (0xc000684d10) Data frame received for 5\nI0422 22:07:31.561758 2557 log.go:172] (0xc000625b80) (5) Data frame handling\nI0422 22:07:31.561951 2557 log.go:172] (0xc000684d10) Data frame received for 3\nI0422 22:07:31.561975 2557 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0422 22:07:31.564038 2557 log.go:172] (0xc000684d10) Data frame received for 1\nI0422 22:07:31.564075 2557 log.go:172] (0xc000625ae0) (1) Data frame handling\nI0422 22:07:31.564098 2557 log.go:172] (0xc000625ae0) (1) Data frame sent\nI0422 22:07:31.564141 2557 log.go:172] (0xc000684d10) (0xc000625ae0) Stream removed, broadcasting: 1\nI0422 22:07:31.564176 2557 log.go:172] (0xc000684d10) Go away received\nI0422 22:07:31.564508 2557 log.go:172] (0xc000684d10) (0xc000625ae0) Stream removed, broadcasting: 1\nI0422 22:07:31.564531 2557 log.go:172] (0xc000684d10) (0xc0007ca000) Stream removed, broadcasting: 3\nI0422 22:07:31.564540 2557 log.go:172] (0xc000684d10) (0xc000625b80) Stream removed, broadcasting: 5\n" Apr 22 22:07:31.568: INFO: stdout: "" Apr 22 22:07:31.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9031 execpodclkn8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30853' Apr 22 22:07:31.789: INFO: stderr: "I0422 
22:07:31.699515 2579 log.go:172] (0xc00063c790) (0xc00061e1e0) Create stream\nI0422 22:07:31.699585 2579 log.go:172] (0xc00063c790) (0xc00061e1e0) Stream added, broadcasting: 1\nI0422 22:07:31.702596 2579 log.go:172] (0xc00063c790) Reply frame received for 1\nI0422 22:07:31.702660 2579 log.go:172] (0xc00063c790) (0xc000513040) Create stream\nI0422 22:07:31.702685 2579 log.go:172] (0xc00063c790) (0xc000513040) Stream added, broadcasting: 3\nI0422 22:07:31.703926 2579 log.go:172] (0xc00063c790) Reply frame received for 3\nI0422 22:07:31.703957 2579 log.go:172] (0xc00063c790) (0xc00061e320) Create stream\nI0422 22:07:31.703968 2579 log.go:172] (0xc00063c790) (0xc00061e320) Stream added, broadcasting: 5\nI0422 22:07:31.705390 2579 log.go:172] (0xc00063c790) Reply frame received for 5\nI0422 22:07:31.782088 2579 log.go:172] (0xc00063c790) Data frame received for 3\nI0422 22:07:31.782124 2579 log.go:172] (0xc000513040) (3) Data frame handling\nI0422 22:07:31.782183 2579 log.go:172] (0xc00063c790) Data frame received for 5\nI0422 22:07:31.782235 2579 log.go:172] (0xc00061e320) (5) Data frame handling\nI0422 22:07:31.782274 2579 log.go:172] (0xc00061e320) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30853\nConnection to 172.17.0.8 30853 port [tcp/30853] succeeded!\nI0422 22:07:31.782548 2579 log.go:172] (0xc00063c790) Data frame received for 5\nI0422 22:07:31.782633 2579 log.go:172] (0xc00061e320) (5) Data frame handling\nI0422 22:07:31.784253 2579 log.go:172] (0xc00063c790) Data frame received for 1\nI0422 22:07:31.784278 2579 log.go:172] (0xc00061e1e0) (1) Data frame handling\nI0422 22:07:31.784289 2579 log.go:172] (0xc00061e1e0) (1) Data frame sent\nI0422 22:07:31.784351 2579 log.go:172] (0xc00063c790) (0xc00061e1e0) Stream removed, broadcasting: 1\nI0422 22:07:31.784422 2579 log.go:172] (0xc00063c790) Go away received\nI0422 22:07:31.784724 2579 log.go:172] (0xc00063c790) (0xc00061e1e0) Stream removed, broadcasting: 1\nI0422 22:07:31.784737 2579 log.go:172] 
(0xc00063c790) (0xc000513040) Stream removed, broadcasting: 3\nI0422 22:07:31.784742 2579 log.go:172] (0xc00063c790) (0xc00061e320) Stream removed, broadcasting: 5\n" Apr 22 22:07:31.789: INFO: stdout: "" Apr 22 22:07:31.789: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:07:31.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9031" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.285 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":213,"skipped":3468,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:07:31.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 22 22:07:31.945: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3376 /api/v1/namespaces/watch-3376/configmaps/e2e-watch-test-watch-closed 44f09e78-7bc3-45cd-98bb-7c8be4593a66 10236263 0 2020-04-22 22:07:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 22 22:07:31.945: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3376 /api/v1/namespaces/watch-3376/configmaps/e2e-watch-test-watch-closed 44f09e78-7bc3-45cd-98bb-7c8be4593a66 10236264 0 2020-04-22 22:07:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 22 22:07:31.963: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3376 /api/v1/namespaces/watch-3376/configmaps/e2e-watch-test-watch-closed 44f09e78-7bc3-45cd-98bb-7c8be4593a66 10236265 0 2020-04-22 22:07:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 22 22:07:31.963: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3376 /api/v1/namespaces/watch-3376/configmaps/e2e-watch-test-watch-closed 44f09e78-7bc3-45cd-98bb-7c8be4593a66 10236266 0 2020-04-22 22:07:31 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:07:31.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3376" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":214,"skipped":3471,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:07:31.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 22 22:07:32.037: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:07:49.333: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3505" for this suite. • [SLOW TEST:17.371 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3488,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:07:49.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Apr 22 22:07:49.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8407' Apr 22 22:07:49.714: INFO: stderr: "" Apr 22 22:07:49.714: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
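The `kubectl create -f -` above is fed the Update Demo replication controller manifest on stdin. A hedged sketch of its shape follows; the image name, selector label, and container name are taken from later entries in this same log, while the port is an assumption.

```yaml
# Sketch of the Update Demo RC created above; port is assumed, other fields
# (name, label, image) appear elsewhere in this log.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
        - name: update-demo
          image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
          ports:
            - containerPort: 80   # assumed
```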
Apr 22 22:07:49.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8407' Apr 22 22:07:49.840: INFO: stderr: "" Apr 22 22:07:49.840: INFO: stdout: "update-demo-nautilus-5jc68 update-demo-nautilus-kxpzp " Apr 22 22:07:49.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jc68 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:07:49.940: INFO: stderr: "" Apr 22 22:07:49.940: INFO: stdout: "" Apr 22 22:07:49.940: INFO: update-demo-nautilus-5jc68 is created but not running Apr 22 22:07:54.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8407' Apr 22 22:07:55.069: INFO: stderr: "" Apr 22 22:07:55.069: INFO: stdout: "update-demo-nautilus-5jc68 update-demo-nautilus-kxpzp " Apr 22 22:07:55.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jc68 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:07:55.166: INFO: stderr: "" Apr 22 22:07:55.166: INFO: stdout: "true" Apr 22 22:07:55.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jc68 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:07:55.272: INFO: stderr: "" Apr 22 22:07:55.272: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 22 22:07:55.272: INFO: validating pod update-demo-nautilus-5jc68 Apr 22 22:07:55.276: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:07:55.276: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 22:07:55.276: INFO: update-demo-nautilus-5jc68 is verified up and running Apr 22 22:07:55.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxpzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:07:55.381: INFO: stderr: "" Apr 22 22:07:55.381: INFO: stdout: "true" Apr 22 22:07:55.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxpzp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:07:55.478: INFO: stderr: "" Apr 22 22:07:55.478: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 22 22:07:55.478: INFO: validating pod update-demo-nautilus-kxpzp Apr 22 22:07:55.482: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:07:55.482: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 22 22:07:55.482: INFO: update-demo-nautilus-kxpzp is verified up and running STEP: scaling down the replication controller Apr 22 22:07:55.485: INFO: scanned /root for discovery docs: Apr 22 22:07:55.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8407' Apr 22 22:07:56.607: INFO: stderr: "" Apr 22 22:07:56.607: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 22 22:07:56.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8407' Apr 22 22:07:56.708: INFO: stderr: "" Apr 22 22:07:56.708: INFO: stdout: "update-demo-nautilus-5jc68 update-demo-nautilus-kxpzp " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 22 22:08:01.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8407' Apr 22 22:08:01.818: INFO: stderr: "" Apr 22 22:08:01.818: INFO: stdout: "update-demo-nautilus-kxpzp " Apr 22 22:08:01.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxpzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:08:01.913: INFO: stderr: "" Apr 22 22:08:01.913: INFO: stdout: "true" Apr 22 22:08:01.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxpzp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:08:02.006: INFO: stderr: "" Apr 22 22:08:02.006: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 22 22:08:02.006: INFO: validating pod update-demo-nautilus-kxpzp Apr 22 22:08:02.010: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:08:02.010: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 22:08:02.010: INFO: update-demo-nautilus-kxpzp is verified up and running STEP: scaling up the replication controller Apr 22 22:08:02.012: INFO: scanned /root for discovery docs: Apr 22 22:08:02.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8407' Apr 22 22:08:03.179: INFO: stderr: "" Apr 22 22:08:03.179: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 22 22:08:03.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8407' Apr 22 22:08:03.269: INFO: stderr: "" Apr 22 22:08:03.269: INFO: stdout: "update-demo-nautilus-kxpzp update-demo-nautilus-q7dgm " Apr 22 22:08:03.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxpzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:08:03.363: INFO: stderr: "" Apr 22 22:08:03.363: INFO: stdout: "true" Apr 22 22:08:03.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxpzp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:08:03.458: INFO: stderr: "" Apr 22 22:08:03.458: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 22 22:08:03.459: INFO: validating pod update-demo-nautilus-kxpzp Apr 22 22:08:03.461: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:08:03.461: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 22:08:03.461: INFO: update-demo-nautilus-kxpzp is verified up and running Apr 22 22:08:03.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q7dgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:08:03.642: INFO: stderr: "" Apr 22 22:08:03.642: INFO: stdout: "" Apr 22 22:08:03.642: INFO: update-demo-nautilus-q7dgm is created but not running Apr 22 22:08:08.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8407' Apr 22 22:08:08.736: INFO: stderr: "" Apr 22 22:08:08.736: INFO: stdout: "update-demo-nautilus-kxpzp update-demo-nautilus-q7dgm " Apr 22 22:08:08.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxpzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:08:08.845: INFO: stderr: "" Apr 22 22:08:08.845: INFO: stdout: "true" Apr 22 22:08:08.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxpzp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:08:08.937: INFO: stderr: "" Apr 22 22:08:08.937: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 22 22:08:08.938: INFO: validating pod update-demo-nautilus-kxpzp Apr 22 22:08:08.941: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:08:08.941: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 22:08:08.941: INFO: update-demo-nautilus-kxpzp is verified up and running Apr 22 22:08:08.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q7dgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:08:09.048: INFO: stderr: "" Apr 22 22:08:09.048: INFO: stdout: "true" Apr 22 22:08:09.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q7dgm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8407' Apr 22 22:08:09.132: INFO: stderr: "" Apr 22 22:08:09.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 22 22:08:09.132: INFO: validating pod update-demo-nautilus-q7dgm Apr 22 22:08:09.135: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:08:09.135: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 22 22:08:09.135: INFO: update-demo-nautilus-q7dgm is verified up and running STEP: using delete to clean up resources Apr 22 22:08:09.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8407' Apr 22 22:08:09.241: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 22:08:09.241: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 22 22:08:09.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8407' Apr 22 22:08:09.328: INFO: stderr: "No resources found in kubectl-8407 namespace.\n" Apr 22 22:08:09.328: INFO: stdout: "" Apr 22 22:08:09.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8407 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 22 22:08:09.428: INFO: stderr: "" Apr 22 22:08:09.428: INFO: stdout: "update-demo-nautilus-kxpzp\nupdate-demo-nautilus-q7dgm\n" Apr 22 22:08:09.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8407' Apr 22 22:08:10.037: INFO: stderr: "No resources found in kubectl-8407 namespace.\n" Apr 22 22:08:10.037: INFO: stdout: "" Apr 22 22:08:10.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8407 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 22 22:08:10.140: INFO: stderr: "" Apr 22 22:08:10.140: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:08:10.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8407" for this suite. • [SLOW TEST:20.806 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":216,"skipped":3501,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:08:10.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0422 22:08:20.435556 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 22 22:08:20.435: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:08:20.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6813" for this suite. 
• [SLOW TEST:10.296 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":217,"skipped":3504,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:08:20.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-6e70fe87-0511-4600-b243-6b3c5c164bb2 STEP: Creating a pod to test consume secrets Apr 22 22:08:20.548: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-58a3122d-4e65-4c97-936d-a1610e6e386a" in namespace "projected-5410" to be "success or failure" Apr 22 22:08:20.562: INFO: Pod "pod-projected-secrets-58a3122d-4e65-4c97-936d-a1610e6e386a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.398844ms Apr 22 22:08:22.566: INFO: Pod "pod-projected-secrets-58a3122d-4e65-4c97-936d-a1610e6e386a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018194437s Apr 22 22:08:24.660: INFO: Pod "pod-projected-secrets-58a3122d-4e65-4c97-936d-a1610e6e386a": Phase="Running", Reason="", readiness=true. Elapsed: 4.111408699s Apr 22 22:08:26.663: INFO: Pod "pod-projected-secrets-58a3122d-4e65-4c97-936d-a1610e6e386a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115150277s STEP: Saw pod success Apr 22 22:08:26.663: INFO: Pod "pod-projected-secrets-58a3122d-4e65-4c97-936d-a1610e6e386a" satisfied condition "success or failure" Apr 22 22:08:26.666: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-58a3122d-4e65-4c97-936d-a1610e6e386a container secret-volume-test: STEP: delete the pod Apr 22 22:08:26.699: INFO: Waiting for pod pod-projected-secrets-58a3122d-4e65-4c97-936d-a1610e6e386a to disappear Apr 22 22:08:26.710: INFO: Pod pod-projected-secrets-58a3122d-4e65-4c97-936d-a1610e6e386a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:08:26.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5410" for this suite. 
• [SLOW TEST:6.272 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3505,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:08:26.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:08:27.795: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:08:29.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190107, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190107, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190107, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190107, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:08:32.860: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:08:43.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-6319" for this suite. STEP: Destroying namespace "webhook-6319-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.500 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":219,"skipped":3563,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:08:43.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6388 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service 
externalsvc in namespace services-6388 STEP: creating replication controller externalsvc in namespace services-6388 I0422 22:08:43.482251 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6388, replica count: 2 I0422 22:08:46.532606 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:08:49.532940 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 22 22:08:49.680: INFO: Creating new exec pod Apr 22 22:08:53.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6388 execpodt5w4x -- /bin/sh -x -c nslookup nodeport-service' Apr 22 22:08:54.048: INFO: stderr: "I0422 22:08:53.935402 3178 log.go:172] (0xc0000f4e70) (0xc0005f9d60) Create stream\nI0422 22:08:53.935475 3178 log.go:172] (0xc0000f4e70) (0xc0005f9d60) Stream added, broadcasting: 1\nI0422 22:08:53.939181 3178 log.go:172] (0xc0000f4e70) Reply frame received for 1\nI0422 22:08:53.939249 3178 log.go:172] (0xc0000f4e70) (0xc00044e960) Create stream\nI0422 22:08:53.939270 3178 log.go:172] (0xc0000f4e70) (0xc00044e960) Stream added, broadcasting: 3\nI0422 22:08:53.940574 3178 log.go:172] (0xc0000f4e70) Reply frame received for 3\nI0422 22:08:53.940607 3178 log.go:172] (0xc0000f4e70) (0xc000994000) Create stream\nI0422 22:08:53.940622 3178 log.go:172] (0xc0000f4e70) (0xc000994000) Stream added, broadcasting: 5\nI0422 22:08:53.941758 3178 log.go:172] (0xc0000f4e70) Reply frame received for 5\nI0422 22:08:54.028857 3178 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0422 22:08:54.028890 3178 log.go:172] (0xc000994000) (5) Data frame handling\nI0422 22:08:54.028918 3178 log.go:172] (0xc000994000) (5) Data frame sent\n+ nslookup nodeport-service\nI0422 22:08:54.037835 
3178 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0422 22:08:54.037863 3178 log.go:172] (0xc00044e960) (3) Data frame handling\nI0422 22:08:54.037905 3178 log.go:172] (0xc00044e960) (3) Data frame sent\nI0422 22:08:54.039057 3178 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0422 22:08:54.039079 3178 log.go:172] (0xc00044e960) (3) Data frame handling\nI0422 22:08:54.039096 3178 log.go:172] (0xc00044e960) (3) Data frame sent\nI0422 22:08:54.039805 3178 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0422 22:08:54.039836 3178 log.go:172] (0xc000994000) (5) Data frame handling\nI0422 22:08:54.039927 3178 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0422 22:08:54.039938 3178 log.go:172] (0xc00044e960) (3) Data frame handling\nI0422 22:08:54.042067 3178 log.go:172] (0xc0000f4e70) Data frame received for 1\nI0422 22:08:54.042097 3178 log.go:172] (0xc0005f9d60) (1) Data frame handling\nI0422 22:08:54.042117 3178 log.go:172] (0xc0005f9d60) (1) Data frame sent\nI0422 22:08:54.042140 3178 log.go:172] (0xc0000f4e70) (0xc0005f9d60) Stream removed, broadcasting: 1\nI0422 22:08:54.042172 3178 log.go:172] (0xc0000f4e70) Go away received\nI0422 22:08:54.042603 3178 log.go:172] (0xc0000f4e70) (0xc0005f9d60) Stream removed, broadcasting: 1\nI0422 22:08:54.042634 3178 log.go:172] (0xc0000f4e70) (0xc00044e960) Stream removed, broadcasting: 3\nI0422 22:08:54.042648 3178 log.go:172] (0xc0000f4e70) (0xc000994000) Stream removed, broadcasting: 5\n" Apr 22 22:08:54.048: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6388.svc.cluster.local\tcanonical name = externalsvc.services-6388.svc.cluster.local.\nName:\texternalsvc.services-6388.svc.cluster.local\nAddress: 10.98.173.99\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6388, will wait for the garbage collector to delete the pods Apr 22 22:08:54.123: INFO: Deleting ReplicationController externalsvc took: 7.90676ms Apr 22 
22:08:54.423: INFO: Terminating ReplicationController externalsvc pods took: 300.233376ms Apr 22 22:09:09.551: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:09:09.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6388" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:26.374 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":220,"skipped":3571,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:09:09.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 22:09:09.673: INFO: 
>>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:09:10.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3063" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":221,"skipped":3571,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:09:10.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:09:11.496: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:09:13.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190151, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190151, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190151, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190151, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:09:15.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190151, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190151, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190151, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190151, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:09:18.536: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:09:18.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9054" for this suite.
STEP: Destroying namespace "webhook-9054-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.282 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":222,"skipped":3577,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:09:19.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Apr 22 22:09:19.305: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7287 /api/v1/namespaces/watch-7287/configmaps/e2e-watch-test-resource-version 95dac00e-3237-43ba-a5ef-9d4f15f4bf10 10237056 0 2020-04-22 22:09:19 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 22 22:09:19.305: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7287 /api/v1/namespaces/watch-7287/configmaps/e2e-watch-test-resource-version 95dac00e-3237-43ba-a5ef-9d4f15f4bf10 10237057 0 2020-04-22 22:09:19 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:09:19.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7287" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":223,"skipped":3579,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:09:19.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 22 22:09:27.450: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 22 22:09:27.457: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 22 22:09:29.457: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 22 22:09:29.462: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 22 22:09:31.457: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 22 22:09:31.463: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:09:31.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5993" for this suite.
• [SLOW TEST:12.162 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3590,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:09:31.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 22 22:09:36.143: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5c130ebb-3f42-40ba-b3a8-df46ae687363"
Apr 22 22:09:36.143: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5c130ebb-3f42-40ba-b3a8-df46ae687363" in namespace "pods-2319" to be "terminated due to deadline exceeded"
Apr 22 22:09:36.150: INFO: Pod "pod-update-activedeadlineseconds-5c130ebb-3f42-40ba-b3a8-df46ae687363": Phase="Running", Reason="", readiness=true. Elapsed: 7.474981ms
Apr 22 22:09:38.155: INFO: Pod "pod-update-activedeadlineseconds-5c130ebb-3f42-40ba-b3a8-df46ae687363": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.011700961s
Apr 22 22:09:38.155: INFO: Pod "pod-update-activedeadlineseconds-5c130ebb-3f42-40ba-b3a8-df46ae687363" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:09:38.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2319" for this suite.
• [SLOW TEST:6.686 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3599,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:09:38.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 22 22:09:38.255: INFO: Waiting up to 5m0s for pod "pod-2554680a-8f18-4eea-9212-7b6ae35134b8" in namespace "emptydir-4287" to be "success or failure"
Apr 22 22:09:38.264: INFO: Pod "pod-2554680a-8f18-4eea-9212-7b6ae35134b8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.742676ms
Apr 22 22:09:40.268: INFO: Pod "pod-2554680a-8f18-4eea-9212-7b6ae35134b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012918663s
Apr 22 22:09:42.272: INFO: Pod "pod-2554680a-8f18-4eea-9212-7b6ae35134b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017101978s
STEP: Saw pod success
Apr 22 22:09:42.272: INFO: Pod "pod-2554680a-8f18-4eea-9212-7b6ae35134b8" satisfied condition "success or failure"
Apr 22 22:09:42.276: INFO: Trying to get logs from node jerma-worker pod pod-2554680a-8f18-4eea-9212-7b6ae35134b8 container test-container:
STEP: delete the pod
Apr 22 22:09:42.303: INFO: Waiting for pod pod-2554680a-8f18-4eea-9212-7b6ae35134b8 to disappear
Apr 22 22:09:42.313: INFO: Pod pod-2554680a-8f18-4eea-9212-7b6ae35134b8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:09:42.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4287" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3625,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:09:42.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:10:42.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2858" for this suite.
• [SLOW TEST:60.091 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3644,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:10:42.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Apr 22 22:10:42.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5108'
Apr 22 22:10:42.787: INFO: stderr: ""
Apr 22 22:10:42.787: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 22 22:10:42.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5108'
Apr 22 22:10:42.904: INFO: stderr: ""
Apr 22 22:10:42.904: INFO: stdout: "update-demo-nautilus-5qqvp update-demo-nautilus-jhcdf "
Apr 22 22:10:42.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qqvp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5108'
Apr 22 22:10:43.008: INFO: stderr: ""
Apr 22 22:10:43.008: INFO: stdout: ""
Apr 22 22:10:43.008: INFO: update-demo-nautilus-5qqvp is created but not running
Apr 22 22:10:48.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5108'
Apr 22 22:10:48.151: INFO: stderr: ""
Apr 22 22:10:48.151: INFO: stdout: "update-demo-nautilus-5qqvp update-demo-nautilus-jhcdf "
Apr 22 22:10:48.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qqvp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5108'
Apr 22 22:10:48.254: INFO: stderr: ""
Apr 22 22:10:48.254: INFO: stdout: "true"
Apr 22 22:10:48.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qqvp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5108'
Apr 22 22:10:48.353: INFO: stderr: ""
Apr 22 22:10:48.353: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 22 22:10:48.353: INFO: validating pod update-demo-nautilus-5qqvp
Apr 22 22:10:48.358: INFO: got data: { "image": "nautilus.jpg" }
Apr 22 22:10:48.358: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 22 22:10:48.358: INFO: update-demo-nautilus-5qqvp is verified up and running
Apr 22 22:10:48.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhcdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5108'
Apr 22 22:10:48.449: INFO: stderr: ""
Apr 22 22:10:48.449: INFO: stdout: "true"
Apr 22 22:10:48.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhcdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5108'
Apr 22 22:10:48.539: INFO: stderr: ""
Apr 22 22:10:48.539: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 22 22:10:48.540: INFO: validating pod update-demo-nautilus-jhcdf
Apr 22 22:10:48.543: INFO: got data: { "image": "nautilus.jpg" }
Apr 22 22:10:48.543: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 22 22:10:48.543: INFO: update-demo-nautilus-jhcdf is verified up and running
STEP: using delete to clean up resources
Apr 22 22:10:48.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5108'
Apr 22 22:10:48.653: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 22 22:10:48.653: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 22 22:10:48.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5108'
Apr 22 22:10:48.749: INFO: stderr: "No resources found in kubectl-5108 namespace.\n"
Apr 22 22:10:48.749: INFO: stdout: ""
Apr 22 22:10:48.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5108 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 22 22:10:48.849: INFO: stderr: ""
Apr 22 22:10:48.849: INFO: stdout: "update-demo-nautilus-5qqvp\nupdate-demo-nautilus-jhcdf\n"
Apr 22 22:10:49.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5108'
Apr 22 22:10:49.516: INFO: stderr: "No resources found in kubectl-5108 namespace.\n"
Apr 22 22:10:49.516: INFO: stdout: ""
Apr 22 22:10:49.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5108 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 22 22:10:49.616: INFO: stderr: ""
Apr 22 22:10:49.616: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:10:49.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5108" for this suite.
• [SLOW TEST:7.268 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":228,"skipped":3662,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:10:49.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Apr 22 22:10:54.407: INFO: Successfully updated pod "labelsupdate251ec380-ceaa-4f18-bd1d-ff80161629cf"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:10:56.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-76" for this suite.
• [SLOW TEST:7.124 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3664,"failed":0}
[sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:10:56.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 22 22:10:57.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Apr 22 22:10:57.587: INFO: stderr: ""
Apr 22 22:10:57.587: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:48:13Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:10:57.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2677" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":230,"skipped":3664,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:10:57.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 22 22:10:57.664: INFO: Waiting up to 5m0s for pod "pod-d6233679-fd92-4160-a44d-971903e5739b" in namespace "emptydir-9845" to be "success or failure"
Apr 22 22:10:57.668: INFO: Pod "pod-d6233679-fd92-4160-a44d-971903e5739b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.814659ms
Apr 22 22:10:59.672: INFO: Pod "pod-d6233679-fd92-4160-a44d-971903e5739b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007755051s
Apr 22 22:11:01.675: INFO: Pod "pod-d6233679-fd92-4160-a44d-971903e5739b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010918644s
STEP: Saw pod success
Apr 22 22:11:01.675: INFO: Pod "pod-d6233679-fd92-4160-a44d-971903e5739b" satisfied condition "success or failure"
Apr 22 22:11:01.678: INFO: Trying to get logs from node jerma-worker2 pod pod-d6233679-fd92-4160-a44d-971903e5739b container test-container:
STEP: delete the pod
Apr 22 22:11:01.733: INFO: Waiting for pod pod-d6233679-fd92-4160-a44d-971903e5739b to disappear
Apr 22 22:11:01.754: INFO: Pod pod-d6233679-fd92-4160-a44d-971903e5739b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:11:01.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9845" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3693,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:11:01.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-11f8d9ef-dcaa-4268-a336-645fbe7cc231
STEP: Creating a pod to test consume configMaps
Apr 22 22:11:01.874: INFO: Waiting up to 5m0s for pod "pod-configmaps-e000a69d-a997-4133-a070-62aff543e2eb" in namespace "configmap-3722" to be "success or failure"
Apr 22 22:11:01.900: INFO: Pod "pod-configmaps-e000a69d-a997-4133-a070-62aff543e2eb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.210436ms
Apr 22 22:11:03.905: INFO: Pod "pod-configmaps-e000a69d-a997-4133-a070-62aff543e2eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030846882s
Apr 22 22:11:05.909: INFO: Pod "pod-configmaps-e000a69d-a997-4133-a070-62aff543e2eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034429041s
STEP: Saw pod success
Apr 22 22:11:05.909: INFO: Pod "pod-configmaps-e000a69d-a997-4133-a070-62aff543e2eb" satisfied condition "success or failure"
Apr 22 22:11:05.912: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e000a69d-a997-4133-a070-62aff543e2eb container configmap-volume-test:
STEP: delete the pod
Apr 22 22:11:05.946: INFO: Waiting for pod pod-configmaps-e000a69d-a997-4133-a070-62aff543e2eb to disappear
Apr 22 22:11:05.962: INFO: Pod pod-configmaps-e000a69d-a997-4133-a070-62aff543e2eb no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:11:05.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3722" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3781,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:11:05.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Apr 22 22:11:12.618: INFO: Successfully updated pod "adopt-release-7j2k4"
STEP: Checking that the Job readopts the Pod
Apr 22 22:11:12.618: INFO: Waiting up to 15m0s for pod "adopt-release-7j2k4" in namespace "job-7142" to be "adopted"
Apr 22 22:11:12.625: INFO: Pod "adopt-release-7j2k4": Phase="Running", Reason="", readiness=true. Elapsed: 6.807786ms
Apr 22 22:11:14.629: INFO: Pod "adopt-release-7j2k4": Phase="Running", Reason="", readiness=true. Elapsed: 2.010894889s
Apr 22 22:11:14.629: INFO: Pod "adopt-release-7j2k4" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Apr 22 22:11:15.136: INFO: Successfully updated pod "adopt-release-7j2k4"
STEP: Checking that the Job releases the Pod
Apr 22 22:11:15.136: INFO: Waiting up to 15m0s for pod "adopt-release-7j2k4" in namespace "job-7142" to be "released"
Apr 22 22:11:15.152: INFO: Pod "adopt-release-7j2k4": Phase="Running", Reason="", readiness=true. Elapsed: 15.838307ms
Apr 22 22:11:17.156: INFO: Pod "adopt-release-7j2k4": Phase="Running", Reason="", readiness=true. Elapsed: 2.019860565s
Apr 22 22:11:17.156: INFO: Pod "adopt-release-7j2k4" satisfied condition "released"
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:11:17.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7142" for this suite.
• [SLOW TEST:11.198 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":233,"skipped":3796,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:11:17.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9303 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9303 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9303 Apr 22 22:11:17.262: INFO: Found 0 
stateful pods, waiting for 1 Apr 22 22:11:27.267: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 22 22:11:27.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9303 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:11:27.509: INFO: stderr: "I0422 22:11:27.396808 3489 log.go:172] (0xc000796a50) (0xc000762000) Create stream\nI0422 22:11:27.396881 3489 log.go:172] (0xc000796a50) (0xc000762000) Stream added, broadcasting: 1\nI0422 22:11:27.399167 3489 log.go:172] (0xc000796a50) Reply frame received for 1\nI0422 22:11:27.399204 3489 log.go:172] (0xc000796a50) (0xc0009f0000) Create stream\nI0422 22:11:27.399217 3489 log.go:172] (0xc000796a50) (0xc0009f0000) Stream added, broadcasting: 3\nI0422 22:11:27.400091 3489 log.go:172] (0xc000796a50) Reply frame received for 3\nI0422 22:11:27.400130 3489 log.go:172] (0xc000796a50) (0xc000579a40) Create stream\nI0422 22:11:27.400159 3489 log.go:172] (0xc000796a50) (0xc000579a40) Stream added, broadcasting: 5\nI0422 22:11:27.401025 3489 log.go:172] (0xc000796a50) Reply frame received for 5\nI0422 22:11:27.476766 3489 log.go:172] (0xc000796a50) Data frame received for 5\nI0422 22:11:27.476790 3489 log.go:172] (0xc000579a40) (5) Data frame handling\nI0422 22:11:27.476807 3489 log.go:172] (0xc000579a40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0422 22:11:27.503607 3489 log.go:172] (0xc000796a50) Data frame received for 5\nI0422 22:11:27.503633 3489 log.go:172] (0xc000579a40) (5) Data frame handling\nI0422 22:11:27.503654 3489 log.go:172] (0xc000796a50) Data frame received for 3\nI0422 22:11:27.503661 3489 log.go:172] (0xc0009f0000) (3) Data frame handling\nI0422 22:11:27.503671 3489 log.go:172] (0xc0009f0000) (3) Data frame sent\nI0422 22:11:27.503679 3489 log.go:172] (0xc000796a50) 
Data frame received for 3\nI0422 22:11:27.503686 3489 log.go:172] (0xc0009f0000) (3) Data frame handling\nI0422 22:11:27.505409 3489 log.go:172] (0xc000796a50) Data frame received for 1\nI0422 22:11:27.505452 3489 log.go:172] (0xc000762000) (1) Data frame handling\nI0422 22:11:27.505468 3489 log.go:172] (0xc000762000) (1) Data frame sent\nI0422 22:11:27.505481 3489 log.go:172] (0xc000796a50) (0xc000762000) Stream removed, broadcasting: 1\nI0422 22:11:27.505498 3489 log.go:172] (0xc000796a50) Go away received\nI0422 22:11:27.505772 3489 log.go:172] (0xc000796a50) (0xc000762000) Stream removed, broadcasting: 1\nI0422 22:11:27.505789 3489 log.go:172] (0xc000796a50) (0xc0009f0000) Stream removed, broadcasting: 3\nI0422 22:11:27.505797 3489 log.go:172] (0xc000796a50) (0xc000579a40) Stream removed, broadcasting: 5\n" Apr 22 22:11:27.509: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:11:27.509: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:11:27.512: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 22 22:11:37.764: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:11:37.764: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:11:37.787: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999367s Apr 22 22:11:39.076: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996508164s Apr 22 22:11:40.080: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.707707231s Apr 22 22:11:41.085: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.703091297s Apr 22 22:11:42.089: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.698394962s Apr 22 22:11:43.884: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.694505196s Apr 22 22:11:44.889: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 2.899348551s Apr 22 22:11:45.893: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.89507332s Apr 22 22:11:46.897: INFO: Verifying statefulset ss doesn't scale past 1 for another 890.725644ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9303 Apr 22 22:11:47.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9303 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:11:48.097: INFO: stderr: "I0422 22:11:48.035509 3512 log.go:172] (0xc0003c0fd0) (0xc0008b0000) Create stream\nI0422 22:11:48.035562 3512 log.go:172] (0xc0003c0fd0) (0xc0008b0000) Stream added, broadcasting: 1\nI0422 22:11:48.037877 3512 log.go:172] (0xc0003c0fd0) Reply frame received for 1\nI0422 22:11:48.037915 3512 log.go:172] (0xc0003c0fd0) (0xc0006f5a40) Create stream\nI0422 22:11:48.037924 3512 log.go:172] (0xc0003c0fd0) (0xc0006f5a40) Stream added, broadcasting: 3\nI0422 22:11:48.038832 3512 log.go:172] (0xc0003c0fd0) Reply frame received for 3\nI0422 22:11:48.038874 3512 log.go:172] (0xc0003c0fd0) (0xc0003ca000) Create stream\nI0422 22:11:48.038886 3512 log.go:172] (0xc0003c0fd0) (0xc0003ca000) Stream added, broadcasting: 5\nI0422 22:11:48.039717 3512 log.go:172] (0xc0003c0fd0) Reply frame received for 5\nI0422 22:11:48.089928 3512 log.go:172] (0xc0003c0fd0) Data frame received for 3\nI0422 22:11:48.090013 3512 log.go:172] (0xc0003c0fd0) Data frame received for 5\nI0422 22:11:48.090059 3512 log.go:172] (0xc0003ca000) (5) Data frame handling\nI0422 22:11:48.090088 3512 log.go:172] (0xc0003ca000) (5) Data frame sent\nI0422 22:11:48.090100 3512 log.go:172] (0xc0003c0fd0) Data frame received for 5\nI0422 22:11:48.090111 3512 log.go:172] (0xc0003ca000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0422 22:11:48.090134 3512 
log.go:172] (0xc0006f5a40) (3) Data frame handling\nI0422 22:11:48.090158 3512 log.go:172] (0xc0006f5a40) (3) Data frame sent\nI0422 22:11:48.090174 3512 log.go:172] (0xc0003c0fd0) Data frame received for 3\nI0422 22:11:48.090184 3512 log.go:172] (0xc0006f5a40) (3) Data frame handling\nI0422 22:11:48.091348 3512 log.go:172] (0xc0003c0fd0) Data frame received for 1\nI0422 22:11:48.091376 3512 log.go:172] (0xc0008b0000) (1) Data frame handling\nI0422 22:11:48.091387 3512 log.go:172] (0xc0008b0000) (1) Data frame sent\nI0422 22:11:48.091456 3512 log.go:172] (0xc0003c0fd0) (0xc0008b0000) Stream removed, broadcasting: 1\nI0422 22:11:48.091694 3512 log.go:172] (0xc0003c0fd0) Go away received\nI0422 22:11:48.091798 3512 log.go:172] (0xc0003c0fd0) (0xc0008b0000) Stream removed, broadcasting: 1\nI0422 22:11:48.091815 3512 log.go:172] (0xc0003c0fd0) (0xc0006f5a40) Stream removed, broadcasting: 3\nI0422 22:11:48.091827 3512 log.go:172] (0xc0003c0fd0) (0xc0003ca000) Stream removed, broadcasting: 5\n" Apr 22 22:11:48.097: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:11:48.097: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:11:48.101: INFO: Found 1 stateful pods, waiting for 3 Apr 22 22:11:58.105: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 22:11:58.105: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 22:11:58.105: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 22 22:11:58.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9303 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 
22:11:58.326: INFO: stderr: "I0422 22:11:58.255079 3531 log.go:172] (0xc000802a50) (0xc000665ea0) Create stream\nI0422 22:11:58.255160 3531 log.go:172] (0xc000802a50) (0xc000665ea0) Stream added, broadcasting: 1\nI0422 22:11:58.258134 3531 log.go:172] (0xc000802a50) Reply frame received for 1\nI0422 22:11:58.258176 3531 log.go:172] (0xc000802a50) (0xc000586780) Create stream\nI0422 22:11:58.258185 3531 log.go:172] (0xc000802a50) (0xc000586780) Stream added, broadcasting: 3\nI0422 22:11:58.259024 3531 log.go:172] (0xc000802a50) Reply frame received for 3\nI0422 22:11:58.259054 3531 log.go:172] (0xc000802a50) (0xc000665f40) Create stream\nI0422 22:11:58.259066 3531 log.go:172] (0xc000802a50) (0xc000665f40) Stream added, broadcasting: 5\nI0422 22:11:58.259847 3531 log.go:172] (0xc000802a50) Reply frame received for 5\nI0422 22:11:58.318296 3531 log.go:172] (0xc000802a50) Data frame received for 3\nI0422 22:11:58.318331 3531 log.go:172] (0xc000586780) (3) Data frame handling\nI0422 22:11:58.318341 3531 log.go:172] (0xc000586780) (3) Data frame sent\nI0422 22:11:58.318347 3531 log.go:172] (0xc000802a50) Data frame received for 3\nI0422 22:11:58.318352 3531 log.go:172] (0xc000586780) (3) Data frame handling\nI0422 22:11:58.318374 3531 log.go:172] (0xc000802a50) Data frame received for 5\nI0422 22:11:58.318381 3531 log.go:172] (0xc000665f40) (5) Data frame handling\nI0422 22:11:58.318389 3531 log.go:172] (0xc000665f40) (5) Data frame sent\nI0422 22:11:58.318398 3531 log.go:172] (0xc000802a50) Data frame received for 5\nI0422 22:11:58.318404 3531 log.go:172] (0xc000665f40) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0422 22:11:58.319817 3531 log.go:172] (0xc000802a50) Data frame received for 1\nI0422 22:11:58.319853 3531 log.go:172] (0xc000665ea0) (1) Data frame handling\nI0422 22:11:58.319868 3531 log.go:172] (0xc000665ea0) (1) Data frame sent\nI0422 22:11:58.319890 3531 log.go:172] (0xc000802a50) (0xc000665ea0) Stream removed, 
broadcasting: 1\nI0422 22:11:58.319913 3531 log.go:172] (0xc000802a50) Go away received\nI0422 22:11:58.320495 3531 log.go:172] (0xc000802a50) (0xc000665ea0) Stream removed, broadcasting: 1\nI0422 22:11:58.320535 3531 log.go:172] (0xc000802a50) (0xc000586780) Stream removed, broadcasting: 3\nI0422 22:11:58.320553 3531 log.go:172] (0xc000802a50) (0xc000665f40) Stream removed, broadcasting: 5\n" Apr 22 22:11:58.326: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:11:58.326: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:11:58.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9303 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:11:58.608: INFO: stderr: "I0422 22:11:58.481800 3554 log.go:172] (0xc000b49290) (0xc000b885a0) Create stream\nI0422 22:11:58.481882 3554 log.go:172] (0xc000b49290) (0xc000b885a0) Stream added, broadcasting: 1\nI0422 22:11:58.483452 3554 log.go:172] (0xc000b49290) Reply frame received for 1\nI0422 22:11:58.483520 3554 log.go:172] (0xc000b49290) (0xc000ab2140) Create stream\nI0422 22:11:58.483549 3554 log.go:172] (0xc000b49290) (0xc000ab2140) Stream added, broadcasting: 3\nI0422 22:11:58.484418 3554 log.go:172] (0xc000b49290) Reply frame received for 3\nI0422 22:11:58.484463 3554 log.go:172] (0xc000b49290) (0xc000b88640) Create stream\nI0422 22:11:58.484494 3554 log.go:172] (0xc000b49290) (0xc000b88640) Stream added, broadcasting: 5\nI0422 22:11:58.485533 3554 log.go:172] (0xc000b49290) Reply frame received for 5\nI0422 22:11:58.563941 3554 log.go:172] (0xc000b49290) Data frame received for 5\nI0422 22:11:58.563993 3554 log.go:172] (0xc000b88640) (5) Data frame handling\nI0422 22:11:58.564022 3554 log.go:172] (0xc000b88640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0422 
22:11:58.598656 3554 log.go:172] (0xc000b49290) Data frame received for 3\nI0422 22:11:58.598697 3554 log.go:172] (0xc000ab2140) (3) Data frame handling\nI0422 22:11:58.598720 3554 log.go:172] (0xc000ab2140) (3) Data frame sent\nI0422 22:11:58.598740 3554 log.go:172] (0xc000b49290) Data frame received for 3\nI0422 22:11:58.598872 3554 log.go:172] (0xc000ab2140) (3) Data frame handling\nI0422 22:11:58.598919 3554 log.go:172] (0xc000b49290) Data frame received for 5\nI0422 22:11:58.598937 3554 log.go:172] (0xc000b88640) (5) Data frame handling\nI0422 22:11:58.600990 3554 log.go:172] (0xc000b49290) Data frame received for 1\nI0422 22:11:58.601010 3554 log.go:172] (0xc000b885a0) (1) Data frame handling\nI0422 22:11:58.601028 3554 log.go:172] (0xc000b885a0) (1) Data frame sent\nI0422 22:11:58.601043 3554 log.go:172] (0xc000b49290) (0xc000b885a0) Stream removed, broadcasting: 1\nI0422 22:11:58.601058 3554 log.go:172] (0xc000b49290) Go away received\nI0422 22:11:58.601700 3554 log.go:172] (0xc000b49290) (0xc000b885a0) Stream removed, broadcasting: 1\nI0422 22:11:58.601733 3554 log.go:172] (0xc000b49290) (0xc000ab2140) Stream removed, broadcasting: 3\nI0422 22:11:58.601747 3554 log.go:172] (0xc000b49290) (0xc000b88640) Stream removed, broadcasting: 5\n" Apr 22 22:11:58.608: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:11:58.608: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:11:58.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9303 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:11:58.851: INFO: stderr: "I0422 22:11:58.750444 3574 log.go:172] (0xc0000ec2c0) (0xc00073b360) Create stream\nI0422 22:11:58.750505 3574 log.go:172] (0xc0000ec2c0) (0xc00073b360) Stream added, broadcasting: 1\nI0422 22:11:58.752561 3574 log.go:172] 
(0xc0000ec2c0) Reply frame received for 1\nI0422 22:11:58.752614 3574 log.go:172] (0xc0000ec2c0) (0xc0006cb900) Create stream\nI0422 22:11:58.752635 3574 log.go:172] (0xc0000ec2c0) (0xc0006cb900) Stream added, broadcasting: 3\nI0422 22:11:58.753594 3574 log.go:172] (0xc0000ec2c0) Reply frame received for 3\nI0422 22:11:58.753623 3574 log.go:172] (0xc0000ec2c0) (0xc000a20000) Create stream\nI0422 22:11:58.753632 3574 log.go:172] (0xc0000ec2c0) (0xc000a20000) Stream added, broadcasting: 5\nI0422 22:11:58.754509 3574 log.go:172] (0xc0000ec2c0) Reply frame received for 5\nI0422 22:11:58.809503 3574 log.go:172] (0xc0000ec2c0) Data frame received for 5\nI0422 22:11:58.809530 3574 log.go:172] (0xc000a20000) (5) Data frame handling\nI0422 22:11:58.809551 3574 log.go:172] (0xc000a20000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0422 22:11:58.844038 3574 log.go:172] (0xc0000ec2c0) Data frame received for 3\nI0422 22:11:58.844090 3574 log.go:172] (0xc0006cb900) (3) Data frame handling\nI0422 22:11:58.844123 3574 log.go:172] (0xc0006cb900) (3) Data frame sent\nI0422 22:11:58.844575 3574 log.go:172] (0xc0000ec2c0) Data frame received for 3\nI0422 22:11:58.844602 3574 log.go:172] (0xc0006cb900) (3) Data frame handling\nI0422 22:11:58.844761 3574 log.go:172] (0xc0000ec2c0) Data frame received for 5\nI0422 22:11:58.844797 3574 log.go:172] (0xc000a20000) (5) Data frame handling\nI0422 22:11:58.847180 3574 log.go:172] (0xc0000ec2c0) Data frame received for 1\nI0422 22:11:58.847195 3574 log.go:172] (0xc00073b360) (1) Data frame handling\nI0422 22:11:58.847201 3574 log.go:172] (0xc00073b360) (1) Data frame sent\nI0422 22:11:58.847209 3574 log.go:172] (0xc0000ec2c0) (0xc00073b360) Stream removed, broadcasting: 1\nI0422 22:11:58.847218 3574 log.go:172] (0xc0000ec2c0) Go away received\nI0422 22:11:58.847552 3574 log.go:172] (0xc0000ec2c0) (0xc00073b360) Stream removed, broadcasting: 1\nI0422 22:11:58.847566 3574 log.go:172] (0xc0000ec2c0) (0xc0006cb900) 
Stream removed, broadcasting: 3\nI0422 22:11:58.847571 3574 log.go:172] (0xc0000ec2c0) (0xc000a20000) Stream removed, broadcasting: 5\n" Apr 22 22:11:58.851: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:11:58.851: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:11:58.851: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:11:58.855: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 22 22:12:08.862: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:12:08.862: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:12:08.862: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:12:08.879: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999442s Apr 22 22:12:09.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98954786s Apr 22 22:12:10.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98415255s Apr 22 22:12:11.907: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.965013942s Apr 22 22:12:12.912: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.960914999s Apr 22 22:12:13.933: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.956519181s Apr 22 22:12:14.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.935144283s Apr 22 22:12:15.944: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.928663651s Apr 22 22:12:16.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.923954502s Apr 22 22:12:17.956: INFO: Verifying statefulset ss doesn't scale past 3 for another 917.359777ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespace statefulset-9303 Apr 22 22:12:18.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9303 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:12:19.183: INFO: stderr: "I0422 22:12:19.100860 3595 log.go:172] (0xc0009300b0) (0xc0006c7f40) Create stream\nI0422 22:12:19.100927 3595 log.go:172] (0xc0009300b0) (0xc0006c7f40) Stream added, broadcasting: 1\nI0422 22:12:19.103300 3595 log.go:172] (0xc0009300b0) Reply frame received for 1\nI0422 22:12:19.103336 3595 log.go:172] (0xc0009300b0) (0xc00067c8c0) Create stream\nI0422 22:12:19.103350 3595 log.go:172] (0xc0009300b0) (0xc00067c8c0) Stream added, broadcasting: 3\nI0422 22:12:19.104171 3595 log.go:172] (0xc0009300b0) Reply frame received for 3\nI0422 22:12:19.104215 3595 log.go:172] (0xc0009300b0) (0xc000920000) Create stream\nI0422 22:12:19.104235 3595 log.go:172] (0xc0009300b0) (0xc000920000) Stream added, broadcasting: 5\nI0422 22:12:19.105040 3595 log.go:172] (0xc0009300b0) Reply frame received for 5\nI0422 22:12:19.176261 3595 log.go:172] (0xc0009300b0) Data frame received for 3\nI0422 22:12:19.176306 3595 log.go:172] (0xc00067c8c0) (3) Data frame handling\nI0422 22:12:19.176329 3595 log.go:172] (0xc00067c8c0) (3) Data frame sent\nI0422 22:12:19.176344 3595 log.go:172] (0xc0009300b0) Data frame received for 3\nI0422 22:12:19.176358 3595 log.go:172] (0xc00067c8c0) (3) Data frame handling\nI0422 22:12:19.176436 3595 log.go:172] (0xc0009300b0) Data frame received for 5\nI0422 22:12:19.176482 3595 log.go:172] (0xc000920000) (5) Data frame handling\nI0422 22:12:19.176512 3595 log.go:172] (0xc000920000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0422 22:12:19.176535 3595 log.go:172] (0xc0009300b0) Data frame received for 5\nI0422 22:12:19.176589 3595 log.go:172] (0xc000920000) (5) Data frame handling\nI0422 22:12:19.177982 3595 log.go:172] (0xc0009300b0) Data frame received for 1\nI0422 
22:12:19.178012 3595 log.go:172] (0xc0006c7f40) (1) Data frame handling\nI0422 22:12:19.178028 3595 log.go:172] (0xc0006c7f40) (1) Data frame sent\nI0422 22:12:19.178042 3595 log.go:172] (0xc0009300b0) (0xc0006c7f40) Stream removed, broadcasting: 1\nI0422 22:12:19.178061 3595 log.go:172] (0xc0009300b0) Go away received\nI0422 22:12:19.178542 3595 log.go:172] (0xc0009300b0) (0xc0006c7f40) Stream removed, broadcasting: 1\nI0422 22:12:19.178570 3595 log.go:172] (0xc0009300b0) (0xc00067c8c0) Stream removed, broadcasting: 3\nI0422 22:12:19.178585 3595 log.go:172] (0xc0009300b0) (0xc000920000) Stream removed, broadcasting: 5\n" Apr 22 22:12:19.183: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:12:19.183: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:12:19.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9303 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:12:19.402: INFO: stderr: "I0422 22:12:19.314557 3616 log.go:172] (0xc0009e6840) (0xc000629ea0) Create stream\nI0422 22:12:19.314633 3616 log.go:172] (0xc0009e6840) (0xc000629ea0) Stream added, broadcasting: 1\nI0422 22:12:19.319157 3616 log.go:172] (0xc0009e6840) Reply frame received for 1\nI0422 22:12:19.319196 3616 log.go:172] (0xc0009e6840) (0xc0005d4640) Create stream\nI0422 22:12:19.319216 3616 log.go:172] (0xc0009e6840) (0xc0005d4640) Stream added, broadcasting: 3\nI0422 22:12:19.320306 3616 log.go:172] (0xc0009e6840) Reply frame received for 3\nI0422 22:12:19.320351 3616 log.go:172] (0xc0009e6840) (0xc000752c80) Create stream\nI0422 22:12:19.320366 3616 log.go:172] (0xc0009e6840) (0xc000752c80) Stream added, broadcasting: 5\nI0422 22:12:19.322079 3616 log.go:172] (0xc0009e6840) Reply frame received for 5\nI0422 22:12:19.394401 3616 log.go:172] (0xc0009e6840) Data frame 
received for 3\nI0422 22:12:19.394444 3616 log.go:172] (0xc0005d4640) (3) Data frame handling\nI0422 22:12:19.394486 3616 log.go:172] (0xc0005d4640) (3) Data frame sent\nI0422 22:12:19.394504 3616 log.go:172] (0xc0009e6840) Data frame received for 3\nI0422 22:12:19.394516 3616 log.go:172] (0xc0005d4640) (3) Data frame handling\nI0422 22:12:19.394549 3616 log.go:172] (0xc0009e6840) Data frame received for 5\nI0422 22:12:19.394582 3616 log.go:172] (0xc000752c80) (5) Data frame handling\nI0422 22:12:19.394613 3616 log.go:172] (0xc000752c80) (5) Data frame sent\nI0422 22:12:19.394638 3616 log.go:172] (0xc0009e6840) Data frame received for 5\nI0422 22:12:19.394649 3616 log.go:172] (0xc000752c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0422 22:12:19.396405 3616 log.go:172] (0xc0009e6840) Data frame received for 1\nI0422 22:12:19.396437 3616 log.go:172] (0xc000629ea0) (1) Data frame handling\nI0422 22:12:19.396466 3616 log.go:172] (0xc000629ea0) (1) Data frame sent\nI0422 22:12:19.396489 3616 log.go:172] (0xc0009e6840) (0xc000629ea0) Stream removed, broadcasting: 1\nI0422 22:12:19.396581 3616 log.go:172] (0xc0009e6840) Go away received\nI0422 22:12:19.397009 3616 log.go:172] (0xc0009e6840) (0xc000629ea0) Stream removed, broadcasting: 1\nI0422 22:12:19.397030 3616 log.go:172] (0xc0009e6840) (0xc0005d4640) Stream removed, broadcasting: 3\nI0422 22:12:19.397049 3616 log.go:172] (0xc0009e6840) (0xc000752c80) Stream removed, broadcasting: 5\n" Apr 22 22:12:19.402: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:12:19.402: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:12:19.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9303 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:12:19.609: INFO: stderr: 
"I0422 22:12:19.532072 3639 log.go:172] (0xc000974790) (0xc000aa0000) Create stream\nI0422 22:12:19.532139 3639 log.go:172] (0xc000974790) (0xc000aa0000) Stream added, broadcasting: 1\nI0422 22:12:19.535005 3639 log.go:172] (0xc000974790) Reply frame received for 1\nI0422 22:12:19.535055 3639 log.go:172] (0xc000974790) (0xc000aa00a0) Create stream\nI0422 22:12:19.535070 3639 log.go:172] (0xc000974790) (0xc000aa00a0) Stream added, broadcasting: 3\nI0422 22:12:19.536263 3639 log.go:172] (0xc000974790) Reply frame received for 3\nI0422 22:12:19.536313 3639 log.go:172] (0xc000974790) (0xc000667c20) Create stream\nI0422 22:12:19.536331 3639 log.go:172] (0xc000974790) (0xc000667c20) Stream added, broadcasting: 5\nI0422 22:12:19.537649 3639 log.go:172] (0xc000974790) Reply frame received for 5\nI0422 22:12:19.600539 3639 log.go:172] (0xc000974790) Data frame received for 3\nI0422 22:12:19.600586 3639 log.go:172] (0xc000aa00a0) (3) Data frame handling\nI0422 22:12:19.600600 3639 log.go:172] (0xc000aa00a0) (3) Data frame sent\nI0422 22:12:19.600611 3639 log.go:172] (0xc000974790) Data frame received for 3\nI0422 22:12:19.600622 3639 log.go:172] (0xc000aa00a0) (3) Data frame handling\nI0422 22:12:19.600716 3639 log.go:172] (0xc000974790) Data frame received for 5\nI0422 22:12:19.600765 3639 log.go:172] (0xc000667c20) (5) Data frame handling\nI0422 22:12:19.600811 3639 log.go:172] (0xc000667c20) (5) Data frame sent\nI0422 22:12:19.600836 3639 log.go:172] (0xc000974790) Data frame received for 5\nI0422 22:12:19.600847 3639 log.go:172] (0xc000667c20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0422 22:12:19.602495 3639 log.go:172] (0xc000974790) Data frame received for 1\nI0422 22:12:19.602525 3639 log.go:172] (0xc000aa0000) (1) Data frame handling\nI0422 22:12:19.602562 3639 log.go:172] (0xc000aa0000) (1) Data frame sent\nI0422 22:12:19.602587 3639 log.go:172] (0xc000974790) (0xc000aa0000) Stream removed, broadcasting: 1\nI0422 22:12:19.602736 
3639 log.go:172] (0xc000974790) Go away received\nI0422 22:12:19.603117 3639 log.go:172] (0xc000974790) (0xc000aa0000) Stream removed, broadcasting: 1\nI0422 22:12:19.603154 3639 log.go:172] (0xc000974790) (0xc000aa00a0) Stream removed, broadcasting: 3\nI0422 22:12:19.603169 3639 log.go:172] (0xc000974790) (0xc000667c20) Stream removed, broadcasting: 5\n" Apr 22 22:12:19.610: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:12:19.610: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:12:19.610: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 22 22:12:49.624: INFO: Deleting all statefulset in ns statefulset-9303 Apr 22 22:12:49.627: INFO: Scaling statefulset ss to 0 Apr 22 22:12:49.635: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:12:49.637: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:12:49.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9303" for this suite. 
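[editor's note] The halting behavior verified above follows from the OrderedReady pod management policy: pods are created one ordinal at a time, deleted in reverse ordinal order, and either direction pauses while any stateful pod is unhealthy (here, made unready by moving index.html away from the readiness probe's path). A simplified, hypothetical model of one reconcile step (not the StatefulSet controller's actual code):

```python
def next_action(ready: list, desired: int) -> tuple:
    """One reconcile step for OrderedReady scaling (simplified sketch).
    `ready` holds a readiness boolean per existing pod, index = ordinal."""
    current = len(ready)
    if current < desired:
        # scale up: create the next ordinal only once every existing pod is ready
        return ("create", current) if all(ready) else ("wait", None)
    if current > desired:
        # scale down: remove the highest ordinal, and only while all pods are healthy
        return ("delete", current - 1) if all(ready) else ("wait", None)
    return ("noop", None)
```

This reproduces what the log shows: scale-up to 3 stalls at 1 replica while ss-0 is unready, and scale-down to 0 stalls at 3 replicas until all pods are healthy again, after which ss-2, ss-1, ss-0 are removed in reverse order.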
• [SLOW TEST:92.494 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":234,"skipped":3798,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:12:49.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0422 22:13:20.256217 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 22 22:13:20.256: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:13:20.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3418" for this suite. 
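[editor's note] The 30-second wait above checks that with deleteOptions.propagationPolicy=Orphan the garbage collector strips the deleted Deployment's ownerReference from its ReplicaSet instead of collecting it. A toy model of that policy (illustrative only; the real GC also cascades through transitive dependents, distinguishes Foreground from Background ordering, and handles objects with multiple owners):

```python
def delete_with_policy(owner_uid: str, objects: dict, policy: str) -> dict:
    """Minimal sketch of propagationPolicy semantics for direct dependents.
    `objects` maps uid -> set of owner UIDs. Returns the surviving objects."""
    survivors = {}
    for uid, owners in objects.items():
        if uid == owner_uid:
            continue                                   # the owner is deleted either way
        if owner_uid in owners:
            if policy == "Orphan":
                survivors[uid] = owners - {owner_uid}  # ref stripped, object kept
            # "Background"/"Foreground": the dependent is garbage collected
            # (cascading to its own dependents omitted in this sketch)
        else:
            survivors[uid] = owners
    return survivors
```

With "Orphan", the ReplicaSet survives ownerless, exactly what the test expects to still find after the deployment is gone.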
• [SLOW TEST:30.601 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":235,"skipped":3824,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:13:20.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-c06bf73d-d869-4884-9b1f-04c205dc56bb STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-c06bf73d-d869-4884-9b1f-04c205dc56bb STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:13:26.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5560" for this suite. 
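[Editor's note] A sketch of the kind of pod the projected-configMap test creates (names and paths here are hypothetical, not the generated ones from the run above):

```yaml
# Pod mounting a ConfigMap through a projected volume. The kubelet
# periodically syncs the projected sources, so an update to the
# ConfigMap is eventually reflected in the files under mountPath,
# which is what the test's "waiting to observe update" step polls for.
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd   # generated suffix omitted
```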
• [SLOW TEST:6.236 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3832,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:13:26.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 22:13:26.635: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b2ae0443-39be-41fc-9663-3026a962d823", Controller:(*bool)(0xc000f0fc2a), BlockOwnerDeletion:(*bool)(0xc000f0fc2b)}} Apr 22 22:13:26.665: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2d559f23-a487-4f3a-98d7-0df014feb2ee", Controller:(*bool)(0xc002f0418a), BlockOwnerDeletion:(*bool)(0xc002f0418b)}} Apr 22 22:13:26.683: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", 
UID:"c8b87bf5-c546-4543-8638-a37f09564fba", Controller:(*bool)(0xc002e6222a), BlockOwnerDeletion:(*bool)(0xc002e6222b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:13:31.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5268" for this suite. • [SLOW TEST:5.265 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":237,"skipped":3851,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:13:31.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Apr 22 22:13:32.529: INFO: created pod pod-service-account-defaultsa Apr 22 22:13:32.529: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 22 22:13:32.571: INFO: created pod pod-service-account-mountsa Apr 22 22:13:32.571: INFO: pod 
pod-service-account-mountsa service account token volume mount: true Apr 22 22:13:32.590: INFO: created pod pod-service-account-nomountsa Apr 22 22:13:32.590: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 22 22:13:32.671: INFO: created pod pod-service-account-defaultsa-mountspec Apr 22 22:13:32.671: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 22 22:13:32.685: INFO: created pod pod-service-account-mountsa-mountspec Apr 22 22:13:32.685: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 22 22:13:32.741: INFO: created pod pod-service-account-nomountsa-mountspec Apr 22 22:13:32.741: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 22 22:13:32.758: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 22 22:13:32.758: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 22 22:13:32.809: INFO: created pod pod-service-account-mountsa-nomountspec Apr 22 22:13:32.809: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 22 22:13:32.836: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 22 22:13:32.836: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:13:32.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3538" for this suite. 
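[Editor's note] The nine pods above cover the precedence matrix for token automount: the pod-level `automountServiceAccountToken` field, when set, wins over the ServiceAccount-level setting (e.g. `nomountsa-mountspec` mounts the token, `defaultsa-nomountspec` does not). A sketch of the opt-out case:

```yaml
# Opting out of API token automount at the pod level. This field
# overrides automountServiceAccountToken on the ServiceAccount itself.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountspec   # hypothetical name
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
```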
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":238,"skipped":3863,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:13:33.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Apr 22 22:13:33.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3371 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 22 22:13:44.014: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0422 22:13:43.941876 3662 log.go:172] (0xc0009fec60) (0xc0007d0280) Create stream\nI0422 22:13:43.941924 3662 log.go:172] (0xc0009fec60) (0xc0007d0280) Stream added, broadcasting: 1\nI0422 22:13:43.944023 3662 log.go:172] (0xc0009fec60) Reply frame received for 1\nI0422 22:13:43.944063 3662 log.go:172] (0xc0009fec60) (0xc0007d8000) Create stream\nI0422 22:13:43.944073 3662 log.go:172] (0xc0009fec60) (0xc0007d8000) Stream added, broadcasting: 3\nI0422 22:13:43.944732 3662 log.go:172] (0xc0009fec60) Reply frame received for 3\nI0422 22:13:43.944763 3662 log.go:172] (0xc0009fec60) (0xc0007d0320) Create stream\nI0422 22:13:43.944772 3662 log.go:172] (0xc0009fec60) (0xc0007d0320) Stream added, broadcasting: 5\nI0422 22:13:43.945525 3662 log.go:172] (0xc0009fec60) Reply frame received for 5\nI0422 22:13:43.945566 3662 log.go:172] (0xc0009fec60) (0xc0007d03c0) Create stream\nI0422 22:13:43.945578 3662 log.go:172] (0xc0009fec60) (0xc0007d03c0) Stream added, broadcasting: 7\nI0422 22:13:43.946194 3662 log.go:172] (0xc0009fec60) Reply frame received for 7\nI0422 22:13:43.946340 3662 log.go:172] (0xc0007d8000) (3) Writing data frame\nI0422 22:13:43.946431 3662 log.go:172] (0xc0007d8000) (3) Writing data frame\nI0422 22:13:43.947054 3662 log.go:172] (0xc0009fec60) Data frame received for 5\nI0422 22:13:43.947072 3662 log.go:172] (0xc0007d0320) (5) Data frame handling\nI0422 22:13:43.947095 3662 log.go:172] (0xc0007d0320) (5) Data frame sent\nI0422 22:13:43.947514 3662 log.go:172] (0xc0009fec60) Data frame received for 5\nI0422 22:13:43.947527 3662 log.go:172] (0xc0007d0320) (5) Data frame handling\nI0422 22:13:43.947538 3662 log.go:172] (0xc0007d0320) (5) Data frame sent\nI0422 22:13:43.990687 3662 log.go:172] (0xc0009fec60) Data frame received for 5\nI0422 22:13:43.990724 3662 log.go:172] (0xc0009fec60) Data frame received for 7\nI0422 22:13:43.990757 
3662 log.go:172] (0xc0007d03c0) (7) Data frame handling\nI0422 22:13:43.990782 3662 log.go:172] (0xc0007d0320) (5) Data frame handling\nI0422 22:13:43.993879 3662 log.go:172] (0xc0009fec60) Data frame received for 1\nI0422 22:13:43.993896 3662 log.go:172] (0xc0007d0280) (1) Data frame handling\nI0422 22:13:43.993903 3662 log.go:172] (0xc0007d0280) (1) Data frame sent\nI0422 22:13:43.993916 3662 log.go:172] (0xc0009fec60) (0xc0007d0280) Stream removed, broadcasting: 1\nI0422 22:13:43.994127 3662 log.go:172] (0xc0009fec60) (0xc0007d8000) Stream removed, broadcasting: 3\nI0422 22:13:43.994175 3662 log.go:172] (0xc0009fec60) Go away received\nI0422 22:13:43.994222 3662 log.go:172] (0xc0009fec60) (0xc0007d0280) Stream removed, broadcasting: 1\nI0422 22:13:43.994241 3662 log.go:172] (0xc0009fec60) (0xc0007d8000) Stream removed, broadcasting: 3\nI0422 22:13:43.994247 3662 log.go:172] (0xc0009fec60) (0xc0007d0320) Stream removed, broadcasting: 5\nI0422 22:13:43.994253 3662 log.go:172] (0xc0009fec60) (0xc0007d03c0) Stream removed, broadcasting: 7\n" Apr 22 22:13:44.014: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:13:46.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3371" for this suite. 
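[Editor's note] The deprecation warning in the stderr above points away from `kubectl run --generator=job/v1`. The Job that generator creates is roughly the following (a sketch; the exact defaults the generator sets may differ, and `kubectl create job` is the non-deprecated route to an equivalent object):

```yaml
# Approximate Job produced by the deprecated generator invocation above.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true          # required for the --attach --stdin flow
```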
• [SLOW TEST:12.973 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":239,"skipped":3867,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:13:46.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 22:13:46.115: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 22 22:13:48.151: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] 
ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:13:49.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-153" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":240,"skipped":3879,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:13:49.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-7c7c20a2-4a5d-480d-905e-38c9ed3e821a STEP: Creating a pod to test consume secrets Apr 22 22:13:50.341: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5995ae40-0895-4a65-ad5a-eaa3094962e5" in namespace "projected-8756" to be "success or failure" Apr 22 22:13:50.545: INFO: Pod "pod-projected-secrets-5995ae40-0895-4a65-ad5a-eaa3094962e5": Phase="Pending", Reason="", readiness=false. Elapsed: 203.706299ms Apr 22 22:13:52.548: INFO: Pod "pod-projected-secrets-5995ae40-0895-4a65-ad5a-eaa3094962e5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.207122595s Apr 22 22:13:54.566: INFO: Pod "pod-projected-secrets-5995ae40-0895-4a65-ad5a-eaa3094962e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.224888254s STEP: Saw pod success Apr 22 22:13:54.566: INFO: Pod "pod-projected-secrets-5995ae40-0895-4a65-ad5a-eaa3094962e5" satisfied condition "success or failure" Apr 22 22:13:54.575: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-5995ae40-0895-4a65-ad5a-eaa3094962e5 container projected-secret-volume-test: STEP: delete the pod Apr 22 22:13:54.599: INFO: Waiting for pod pod-projected-secrets-5995ae40-0895-4a65-ad5a-eaa3094962e5 to disappear Apr 22 22:13:54.625: INFO: Pod pod-projected-secrets-5995ae40-0895-4a65-ad5a-eaa3094962e5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:13:54.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8756" for this suite. 
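[Editor's note] The consuming pod in this test follows the same projected-volume pattern as the configMap case, with a secret source; a hypothetical sketch:

```yaml
# Pod consuming a Secret via a projected volume; the container reads
# the secret key as a file and exits, hence "success or failure".
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected-secret-volume/data-1"]   # assumed key
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test   # generated suffix omitted
```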
• [SLOW TEST:5.185 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3894,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:13:54.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 22 22:13:54.742: INFO: Waiting up to 5m0s for pod "downward-api-fd1cb4ec-6a31-4a1f-bda9-2f02ad46b30f" in namespace "downward-api-3288" to be "success or failure" Apr 22 22:13:54.791: INFO: Pod "downward-api-fd1cb4ec-6a31-4a1f-bda9-2f02ad46b30f": Phase="Pending", Reason="", readiness=false. Elapsed: 48.712534ms Apr 22 22:13:56.802: INFO: Pod "downward-api-fd1cb4ec-6a31-4a1f-bda9-2f02ad46b30f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.059983812s Apr 22 22:13:58.807: INFO: Pod "downward-api-fd1cb4ec-6a31-4a1f-bda9-2f02ad46b30f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064890138s STEP: Saw pod success Apr 22 22:13:58.807: INFO: Pod "downward-api-fd1cb4ec-6a31-4a1f-bda9-2f02ad46b30f" satisfied condition "success or failure" Apr 22 22:13:58.810: INFO: Trying to get logs from node jerma-worker2 pod downward-api-fd1cb4ec-6a31-4a1f-bda9-2f02ad46b30f container dapi-container: STEP: delete the pod Apr 22 22:13:58.857: INFO: Waiting for pod downward-api-fd1cb4ec-6a31-4a1f-bda9-2f02ad46b30f to disappear Apr 22 22:13:58.874: INFO: Pod downward-api-fd1cb4ec-6a31-4a1f-bda9-2f02ad46b30f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:13:58.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3288" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3898,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:13:58.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Apr 22 22:13:58.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-471' Apr 22 22:13:59.180: INFO: stderr: "" Apr 22 22:13:59.180: INFO: stdout: "pod/pause created\n" Apr 22 22:13:59.180: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 22 22:13:59.180: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-471" to be "running and ready" Apr 22 22:13:59.210: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 29.586625ms Apr 22 22:14:01.228: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047218706s Apr 22 22:14:03.232: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.051589029s Apr 22 22:14:03.232: INFO: Pod "pause" satisfied condition "running and ready" Apr 22 22:14:03.232: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Apr 22 22:14:03.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-471' Apr 22 22:14:03.573: INFO: stderr: "" Apr 22 22:14:03.574: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 22 22:14:03.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-471' Apr 22 22:14:03.665: INFO: stderr: "" Apr 22 22:14:03.665: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 22 22:14:03.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-471' Apr 22 22:14:03.767: INFO: stderr: "" Apr 22 22:14:03.767: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 22 22:14:03.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-471' Apr 22 22:14:03.862: INFO: stderr: "" Apr 22 22:14:03.862: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Apr 22 22:14:03.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-471' Apr 22 22:14:04.014: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been 
terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 22:14:04.014: INFO: stdout: "pod \"pause\" force deleted\n" Apr 22 22:14:04.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-471' Apr 22 22:14:04.194: INFO: stderr: "No resources found in kubectl-471 namespace.\n" Apr 22 22:14:04.194: INFO: stdout: "" Apr 22 22:14:04.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-471 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 22 22:14:04.321: INFO: stderr: "" Apr 22 22:14:04.322: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:14:04.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-471" for this suite. 
• [SLOW TEST:5.561 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":243,"skipped":3952,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:14:04.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 22 22:14:04.831: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Apr 22 22:14:05.507: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 22 22:14:08.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190445, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190445, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190445, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723190445, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:14:11.152: INFO: Waited 620.763044ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:14:11.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8289" for this suite. 
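[Editor's note] "Registering the sample API server" boils down to creating an APIService object that points the aggregation layer at the sample-apiserver's Service. A sketch, with names following the sample-apiserver ("wardle") convention; all names here are assumptions, not values from this run:

```yaml
# APIService registering a group/version served by an aggregated apiserver.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api            # Service fronting sample-apiserver-deployment
    namespace: aggregator-8289
  insecureSkipTLSVerify: true   # a caBundle would be supplied instead in practice
```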
• [SLOW TEST:7.261 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":244,"skipped":3965,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:14:11.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-6624 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6624 to expose endpoints map[] Apr 22 22:14:13.383: INFO: Get endpoints failed (3.759924ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 22 22:14:14.387: INFO: successfully validated that service endpoint-test2 in namespace services-6624 exposes endpoints map[] (1.007719815s elapsed) STEP: Creating pod pod1 in namespace services-6624 STEP: waiting up to 3m0s for service 
endpoint-test2 in namespace services-6624 to expose endpoints map[pod1:[80]] Apr 22 22:14:18.597: INFO: successfully validated that service endpoint-test2 in namespace services-6624 exposes endpoints map[pod1:[80]] (4.203164105s elapsed) STEP: Creating pod pod2 in namespace services-6624 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6624 to expose endpoints map[pod1:[80] pod2:[80]] Apr 22 22:14:23.336: INFO: successfully validated that service endpoint-test2 in namespace services-6624 exposes endpoints map[pod1:[80] pod2:[80]] (4.734173521s elapsed) STEP: Deleting pod pod1 in namespace services-6624 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6624 to expose endpoints map[pod2:[80]] Apr 22 22:14:24.632: INFO: successfully validated that service endpoint-test2 in namespace services-6624 exposes endpoints map[pod2:[80]] (1.29238273s elapsed) STEP: Deleting pod pod2 in namespace services-6624 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6624 to expose endpoints map[] Apr 22 22:14:25.923: INFO: successfully validated that service endpoint-test2 in namespace services-6624 exposes endpoints map[] (1.095713771s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:14:26.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6624" for this suite. 
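[Editor's note] The endpoints map the test polls (`map[]` → `map[pod1:[80]]` → `map[pod1:[80] pod2:[80]]` and back) is derived from the Service's label selector: the endpoints controller adds or removes a pod's address as it matches and becomes ready, or is deleted. A sketch of the Service (the label key is an assumption):

```yaml
# Service whose endpoints track ready pods matching the selector.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2   # assumed label carried by pod1 and pod2
  ports:
  - port: 80
    protocol: TCP
```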
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:14.763 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":245,"skipped":3983,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:14:26.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 22 22:14:35.750: INFO: 7 pods remaining
Apr 22 22:14:35.750: INFO: 0 pods has nil DeletionTimestamp
Apr 22 22:14:35.750: INFO:
Apr 22 22:14:37.612: INFO: 0 pods remaining
Apr 22 22:14:37.613: INFO: 0 pods has nil DeletionTimestamp
Apr 22 22:14:37.613: INFO:
STEP: Gathering metrics
W0422 22:14:38.804495 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 22 22:14:38.804: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:14:38.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3138" for this suite.
• [SLOW TEST:12.602 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":246,"skipped":4013,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:14:39.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-2af934ff-d793-4671-bbc6-3973a6bc6b87
STEP: Creating a pod to test consume configMaps
Apr 22 22:14:40.258: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-424f081e-cc65-4efd-901d-f907dd0b221a" in namespace "projected-5092" to be "success or failure"
Apr 22 22:14:40.576: INFO: Pod "pod-projected-configmaps-424f081e-cc65-4efd-901d-f907dd0b221a": Phase="Pending", Reason="", readiness=false. Elapsed: 317.901512ms
Apr 22 22:14:42.581: INFO: Pod "pod-projected-configmaps-424f081e-cc65-4efd-901d-f907dd0b221a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322830391s
Apr 22 22:14:45.034: INFO: Pod "pod-projected-configmaps-424f081e-cc65-4efd-901d-f907dd0b221a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.775585367s
STEP: Saw pod success
Apr 22 22:14:45.034: INFO: Pod "pod-projected-configmaps-424f081e-cc65-4efd-901d-f907dd0b221a" satisfied condition "success or failure"
Apr 22 22:14:45.594: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-424f081e-cc65-4efd-901d-f907dd0b221a container projected-configmap-volume-test:
STEP: delete the pod
Apr 22 22:14:45.802: INFO: Waiting for pod pod-projected-configmaps-424f081e-cc65-4efd-901d-f907dd0b221a to disappear
Apr 22 22:14:45.808: INFO: Pod pod-projected-configmaps-424f081e-cc65-4efd-901d-f907dd0b221a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:14:45.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5092" for this suite.
• [SLOW TEST:6.743 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4015,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:14:45.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Apr 22 22:14:45.897: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 22:14:45.953: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 22:14:45.972: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Apr 22 22:14:45.986: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:14:45.986: INFO: Container kindnet-cni ready: true, restart count 0
Apr 22 22:14:45.986: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:14:45.986: INFO: Container kube-proxy ready: true, restart count 0
Apr 22 22:14:45.986: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Apr 22 22:14:46.003: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:14:46.003: INFO: Container kube-hunter ready: false, restart count 0
Apr 22 22:14:46.003: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
Apr 22 22:14:46.003: INFO: Container kube-bench ready: false, restart count 0
Apr 22 22:14:46.003: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:14:46.003: INFO: Container kindnet-cni ready: true, restart count 0
Apr 22 22:14:46.003: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:14:46.003: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1608440bc81e70b9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:14:47.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6299" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":248,"skipped":4038,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:14:47.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 22 22:14:47.141: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 22:14:50.122: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:15:00.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6436" for this suite.
• [SLOW TEST:13.684 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":249,"skipped":4038,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:15:00.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2951
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 22:15:00.808: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 22 22:15:22.923: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.43:8080/dial?request=hostname&protocol=udp&host=10.244.1.181&port=8081&tries=1'] Namespace:pod-network-test-2951 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 22 22:15:22.923: INFO: >>> kubeConfig: /root/.kube/config
I0422 22:15:22.958918 6 log.go:172] (0xc003420420) (0xc001e26aa0) Create stream
I0422 22:15:22.958971 6 log.go:172] (0xc003420420) (0xc001e26aa0) Stream added, broadcasting: 1
I0422 22:15:22.961480 6 log.go:172] (0xc003420420) Reply frame received for 1
I0422 22:15:22.961533 6 log.go:172] (0xc003420420) (0xc001a6cfa0) Create stream
I0422 22:15:22.961551 6 log.go:172] (0xc003420420) (0xc001a6cfa0) Stream added, broadcasting: 3
I0422 22:15:22.962579 6 log.go:172] (0xc003420420) Reply frame received for 3
I0422 22:15:22.962640 6 log.go:172] (0xc003420420) (0xc002758280) Create stream
I0422 22:15:22.962667 6 log.go:172] (0xc003420420) (0xc002758280) Stream added, broadcasting: 5
I0422 22:15:22.963969 6 log.go:172] (0xc003420420) Reply frame received for 5
I0422 22:15:23.055214 6 log.go:172] (0xc003420420) Data frame received for 3
I0422 22:15:23.055249 6 log.go:172] (0xc001a6cfa0) (3) Data frame handling
I0422 22:15:23.055270 6 log.go:172] (0xc001a6cfa0) (3) Data frame sent
I0422 22:15:23.056133 6 log.go:172] (0xc003420420) Data frame received for 3
I0422 22:15:23.056159 6 log.go:172] (0xc001a6cfa0) (3) Data frame handling
I0422 22:15:23.056401 6 log.go:172] (0xc003420420) Data frame received for 5
I0422 22:15:23.056424 6 log.go:172] (0xc002758280) (5) Data frame handling
I0422 22:15:23.064804 6 log.go:172] (0xc003420420) Data frame received for 1
I0422 22:15:23.064871 6 log.go:172] (0xc001e26aa0) (1) Data frame handling
I0422 22:15:23.064934 6 log.go:172] (0xc001e26aa0) (1) Data frame sent
I0422 22:15:23.065008 6 log.go:172] (0xc003420420) (0xc001e26aa0) Stream removed, broadcasting: 1
I0422 22:15:23.065071 6 log.go:172] (0xc003420420) Go away received
I0422 22:15:23.065334 6 log.go:172] (0xc003420420) (0xc001e26aa0) Stream removed, broadcasting: 1
I0422 22:15:23.065407 6 log.go:172] (0xc003420420) (0xc001a6cfa0) Stream removed, broadcasting: 3
I0422 22:15:23.065452 6 log.go:172] (0xc003420420) (0xc002758280) Stream removed, broadcasting: 5
Apr 22 22:15:23.065: INFO: Waiting for responses: map[]
Apr 22 22:15:23.069: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.43:8080/dial?request=hostname&protocol=udp&host=10.244.2.42&port=8081&tries=1'] Namespace:pod-network-test-2951 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 22 22:15:23.069: INFO: >>> kubeConfig: /root/.kube/config
I0422 22:15:23.093445 6 log.go:172] (0xc00390a4d0) (0xc002758a00) Create stream
I0422 22:15:23.093474 6 log.go:172] (0xc00390a4d0) (0xc002758a00) Stream added, broadcasting: 1
I0422 22:15:23.095126 6 log.go:172] (0xc00390a4d0) Reply frame received for 1
I0422 22:15:23.095162 6 log.go:172] (0xc00390a4d0) (0xc0023daa00) Create stream
I0422 22:15:23.095173 6 log.go:172] (0xc00390a4d0) (0xc0023daa00) Stream added, broadcasting: 3
I0422 22:15:23.095903 6 log.go:172] (0xc00390a4d0) Reply frame received for 3
I0422 22:15:23.095934 6 log.go:172] (0xc00390a4d0) (0xc0023dac80) Create stream
I0422 22:15:23.095946 6 log.go:172] (0xc00390a4d0) (0xc0023dac80) Stream added, broadcasting: 5
I0422 22:15:23.096834 6 log.go:172] (0xc00390a4d0) Reply frame received for 5
I0422 22:15:23.157627 6 log.go:172] (0xc00390a4d0) Data frame received for 3
I0422 22:15:23.157651 6 log.go:172] (0xc0023daa00) (3) Data frame handling
I0422 22:15:23.157659 6 log.go:172] (0xc0023daa00) (3) Data frame sent
I0422 22:15:23.158600 6 log.go:172] (0xc00390a4d0) Data frame received for 3
I0422 22:15:23.158695 6 log.go:172] (0xc0023daa00) (3) Data frame handling
I0422 22:15:23.158729 6 log.go:172] (0xc00390a4d0) Data frame received for 5
I0422 22:15:23.158762 6 log.go:172] (0xc0023dac80) (5) Data frame handling
I0422 22:15:23.160772 6 log.go:172] (0xc00390a4d0) Data frame received for 1
I0422 22:15:23.160799 6 log.go:172] (0xc002758a00) (1) Data frame handling
I0422 22:15:23.160837 6 log.go:172] (0xc002758a00) (1) Data frame sent
I0422 22:15:23.160856 6 log.go:172] (0xc00390a4d0) (0xc002758a00) Stream removed, broadcasting: 1
I0422 22:15:23.160873 6 log.go:172] (0xc00390a4d0) Go away received
I0422 22:15:23.161334 6 log.go:172] (0xc00390a4d0) (0xc002758a00) Stream removed, broadcasting: 1
I0422 22:15:23.161354 6 log.go:172] (0xc00390a4d0) (0xc0023daa00) Stream removed, broadcasting: 3
I0422 22:15:23.161365 6 log.go:172] (0xc00390a4d0) (0xc0023dac80) Stream removed, broadcasting: 5
Apr 22 22:15:23.161: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:15:23.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2951" for this suite.
• [SLOW TEST:22.434 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4073,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:15:23.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Apr 22 22:15:27.273: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 22 22:15:42.386: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:15:42.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5721" for this suite.
• [SLOW TEST:19.229 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":251,"skipped":4096,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:15:42.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Apr 22 22:15:47.031: INFO: Successfully updated pod "annotationupdatef50c80fc-dffe-4ad9-9eb6-1040bd8101cd"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:15:49.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6622" for this suite.
• [SLOW TEST:6.661 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4119,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:15:49.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Apr 22 22:15:49.141: INFO: Waiting up to 5m0s for pod "client-containers-d94b3809-e4fe-4334-ad0a-c2cf52d1474f" in namespace "containers-5008" to be "success or failure"
Apr 22 22:15:49.169: INFO: Pod "client-containers-d94b3809-e4fe-4334-ad0a-c2cf52d1474f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.453447ms
Apr 22 22:15:51.201: INFO: Pod "client-containers-d94b3809-e4fe-4334-ad0a-c2cf52d1474f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060487245s
Apr 22 22:15:53.205: INFO: Pod "client-containers-d94b3809-e4fe-4334-ad0a-c2cf52d1474f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064434773s
STEP: Saw pod success
Apr 22 22:15:53.205: INFO: Pod "client-containers-d94b3809-e4fe-4334-ad0a-c2cf52d1474f" satisfied condition "success or failure"
Apr 22 22:15:53.208: INFO: Trying to get logs from node jerma-worker pod client-containers-d94b3809-e4fe-4334-ad0a-c2cf52d1474f container test-container:
STEP: delete the pod
Apr 22 22:15:53.298: INFO: Waiting for pod client-containers-d94b3809-e4fe-4334-ad0a-c2cf52d1474f to disappear
Apr 22 22:15:53.351: INFO: Pod client-containers-d94b3809-e4fe-4334-ad0a-c2cf52d1474f no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:15:53.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5008" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4144,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:15:53.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 22 22:15:53.543: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:15:53.560: INFO: Number of nodes with available pods: 0
Apr 22 22:15:53.560: INFO: Node jerma-worker is running more than one daemon pod
Apr 22 22:15:54.564: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:15:54.566: INFO: Number of nodes with available pods: 0
Apr 22 22:15:54.566: INFO: Node jerma-worker is running more than one daemon pod
Apr 22 22:15:55.566: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:15:55.569: INFO: Number of nodes with available pods: 0
Apr 22 22:15:55.569: INFO: Node jerma-worker is running more than one daemon pod
Apr 22 22:15:56.614: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:15:56.671: INFO: Number of nodes with available pods: 0
Apr 22 22:15:56.671: INFO: Node jerma-worker is running more than one daemon pod
Apr 22 22:15:57.565: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:15:57.568: INFO: Number of nodes with available pods: 2
Apr 22 22:15:57.568: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 22 22:15:57.594: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:15:57.597: INFO: Number of nodes with available pods: 1 Apr 22 22:15:57.597: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:15:58.601: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:15:58.604: INFO: Number of nodes with available pods: 1 Apr 22 22:15:58.604: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:15:59.734: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:15:59.738: INFO: Number of nodes with available pods: 1 Apr 22 22:15:59.738: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:00.705: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:00.709: INFO: Number of nodes with available pods: 1 Apr 22 22:16:00.709: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:01.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:01.606: INFO: Number of nodes with available pods: 1 Apr 22 22:16:01.606: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:02.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:02.606: INFO: Number of nodes with available pods: 1 Apr 22 22:16:02.606: INFO: 
Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:03.626: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:03.629: INFO: Number of nodes with available pods: 1 Apr 22 22:16:03.629: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:04.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:04.606: INFO: Number of nodes with available pods: 1 Apr 22 22:16:04.606: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:05.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:05.606: INFO: Number of nodes with available pods: 1 Apr 22 22:16:05.606: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:07.249: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:07.284: INFO: Number of nodes with available pods: 1 Apr 22 22:16:07.284: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:07.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:07.605: INFO: Number of nodes with available pods: 1 Apr 22 22:16:07.605: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:08.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:08.606: INFO: Number of 
nodes with available pods: 1 Apr 22 22:16:08.606: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:09.614: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:09.619: INFO: Number of nodes with available pods: 1 Apr 22 22:16:09.619: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:10.601: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:10.605: INFO: Number of nodes with available pods: 1 Apr 22 22:16:10.605: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:11.601: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:11.604: INFO: Number of nodes with available pods: 1 Apr 22 22:16:11.604: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:12.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:12.606: INFO: Number of nodes with available pods: 1 Apr 22 22:16:12.606: INFO: Node jerma-worker2 is running more than one daemon pod Apr 22 22:16:13.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 22:16:13.607: INFO: Number of nodes with available pods: 2 Apr 22 22:16:13.607: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" 
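The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines above come from the e2e framework skipping the tainted control-plane node when counting available daemon pods. For reference, a minimal sketch (names and image illustrative, not taken from the test) of the toleration a DaemonSet would need to also schedule onto that tainted node:

```yaml
# Illustrative only: tolerating the control-plane taint seen in the log
# ({Key:node-role.kubernetes.io/master Effect:NoSchedule}).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemon-set        # hypothetical name
spec:
  selector:
    matchLabels:
      app: example-daemon-set
  template:
    metadata:
      labels:
        app: example-daemon-set
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists          # tolerate the taint regardless of its value
        effect: NoSchedule
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
```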
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4908, will wait for the garbage collector to delete the pods Apr 22 22:16:13.669: INFO: Deleting DaemonSet.extensions daemon-set took: 7.290951ms Apr 22 22:16:13.970: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.244077ms Apr 22 22:16:19.601: INFO: Number of nodes with available pods: 0 Apr 22 22:16:19.601: INFO: Number of running nodes: 0, number of available pods: 0 Apr 22 22:16:19.604: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4908/daemonsets","resourceVersion":"10239776"},"items":null} Apr 22 22:16:19.606: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4908/pods","resourceVersion":"10239776"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:16:19.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4908" for this suite. • [SLOW TEST:26.270 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":254,"skipped":4155,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:16:19.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:16:32.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5567" for this suite. • [SLOW TEST:13.200 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":255,"skipped":4176,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:16:32.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:16:48.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2158" for this suite. • [SLOW TEST:16.165 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":256,"skipped":4187,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:16:48.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 22 22:16:53.606: INFO: Successfully updated pod "pod-update-03b05002-d701-47fa-bd8d-06d5d35ab207" STEP: verifying the updated pod is in kubernetes Apr 22 22:16:53.631: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:16:53.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4554" for this suite. 
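The Pods "should be updated" test above patches a running pod. Only a handful of pod fields are mutable after creation (notably labels, annotations, container images, and activeDeadlineSeconds); a hypothetical strategic-merge patch body for a label update, as a sketch:

```yaml
# Hypothetical patch for a running pod (e.g. via `kubectl patch pod <name>`);
# labels are one of the few pod fields that may change after creation.
metadata:
  labels:
    time: updated   # illustrative label value
```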
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4216,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:16:53.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 22 22:16:53.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3043' Apr 22 22:16:57.265: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 22 22:16:57.265: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 Apr 22 22:17:01.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3043' Apr 22 22:17:01.460: INFO: stderr: "" Apr 22 22:17:01.460: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:17:01.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3043" for this suite. 
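The stderr captured above notes that `kubectl run --generator=deployment/apps.v1` is deprecated. The non-deprecated route is `kubectl create deployment` or an explicit manifest; a sketch of the equivalent apps/v1 Deployment for the image used in this test:

```yaml
# Equivalent to the deprecated `kubectl run --generator=deployment/apps.v1`
# invocation in the log (label key illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine
```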
• [SLOW TEST:7.828 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1622 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":258,"skipped":4221,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:17:01.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 22 22:17:01.576: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 22 22:17:01.591: INFO: Waiting for terminating namespaces to be deleted... 
Apr 22 22:17:01.593: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 22 22:17:01.599: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 22:17:01.599: INFO: Container kindnet-cni ready: true, restart count 0 Apr 22 22:17:01.599: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 22:17:01.599: INFO: Container kube-proxy ready: true, restart count 0 Apr 22 22:17:01.599: INFO: e2e-test-httpd-deployment-594dddd44f-d29d4 from kubectl-3043 started at 2020-04-22 22:16:57 +0000 UTC (1 container statuses recorded) Apr 22 22:17:01.599: INFO: Container e2e-test-httpd-deployment ready: true, restart count 0 Apr 22 22:17:01.599: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 22 22:17:01.606: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 22:17:01.606: INFO: Container kindnet-cni ready: true, restart count 0 Apr 22 22:17:01.606: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 22 22:17:01.606: INFO: Container kube-bench ready: false, restart count 0 Apr 22 22:17:01.606: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 22 22:17:01.606: INFO: Container kube-proxy ready: true, restart count 0 Apr 22 22:17:01.606: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 22 22:17:01.606: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Apr 22 
22:17:01.670: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Apr 22 22:17:01.670: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Apr 22 22:17:01.670: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Apr 22 22:17:01.670: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 Apr 22 22:17:01.670: INFO: Pod e2e-test-httpd-deployment-594dddd44f-d29d4 requesting resource cpu=0m on Node jerma-worker STEP: Starting Pods to consume most of the cluster CPU. Apr 22 22:17:01.670: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Apr 22 22:17:01.701: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-4de03e32-5bf4-4359-9068-d8709936d4b4.1608442b5ec05098], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5880/filler-pod-4de03e32-5bf4-4359-9068-d8709936d4b4 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-4de03e32-5bf4-4359-9068-d8709936d4b4.1608442bc065b2f7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-4de03e32-5bf4-4359-9068-d8709936d4b4.1608442bf1257dc5], Reason = [Created], Message = [Created container filler-pod-4de03e32-5bf4-4359-9068-d8709936d4b4] STEP: Considering event: Type = [Normal], Name = [filler-pod-4de03e32-5bf4-4359-9068-d8709936d4b4.1608442c05ecc226], Reason = [Started], Message = [Started container filler-pod-4de03e32-5bf4-4359-9068-d8709936d4b4] STEP: Considering event: Type = [Normal], Name = [filler-pod-5779486f-2b0a-4847-be3e-0da2ceb77664.1608442b608cc7b9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5880/filler-pod-5779486f-2b0a-4847-be3e-0da2ceb77664 to jerma-worker2] STEP: Considering event: Type = [Normal], Name 
= [filler-pod-5779486f-2b0a-4847-be3e-0da2ceb77664.1608442bc55875e2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5779486f-2b0a-4847-be3e-0da2ceb77664.1608442bffbed92b], Reason = [Created], Message = [Created container filler-pod-5779486f-2b0a-4847-be3e-0da2ceb77664] STEP: Considering event: Type = [Normal], Name = [filler-pod-5779486f-2b0a-4847-be3e-0da2ceb77664.1608442c11103e8c], Reason = [Started], Message = [Started container filler-pod-5779486f-2b0a-4847-be3e-0da2ceb77664] STEP: Considering event: Type = [Warning], Name = [additional-pod.1608442c50f779a2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:17:06.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5880" for this suite. 
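The FailedScheduling event above ("0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.") is driven by CPU requests: once the filler pods consume most allocatable CPU, any pod requesting more than the remainder is unschedulable. A sketch of such a pod (request value illustrative):

```yaml
# Illustrative pod whose CPU request exceeds what remains on every node,
# producing a FailedScheduling event like the one in the log.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod        # name taken from the test's event
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"           # hypothetical request beyond remaining capacity
```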
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.441 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":259,"skipped":4223,"failed":0} [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:17:06.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:17:21.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-657" for this suite. 
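The Job test above exercises local restarts: with `restartPolicy: OnFailure`, a failing container is restarted in place by the kubelet rather than the Job controller creating a replacement pod. A minimal sketch (name, image, and command are illustrative, not the test's):

```yaml
# Sketch of a Job whose tasks are locally restarted on failure.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  completions: 1
  parallelism: 1
  template:
    spec:
      restartPolicy: OnFailure   # kubelet restarts the container in the same pod
      containers:
      - name: worker
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "exit 0"]   # illustrative workload
```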
• [SLOW TEST:14.136 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":260,"skipped":4223,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:17:21.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4194.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4194.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 22:17:27.218: INFO: DNS probes using dns-4194/dns-test-a8f11d89-9a25-4b7d-81f2-308f36300c0d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:17:27.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4194" for this suite. 
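The dig probes in the commands above target the two cluster-DNS name shapes the test validates (values below are illustrative, matching the `dns-4194` namespace from the log):

```yaml
# DNS record shapes probed by the test:
#   service A record: <service>.<namespace>.svc.cluster.local
#                     e.g. kubernetes.default.svc.cluster.local
#   pod A record:     <pod-ip-with-dashes>.<namespace>.pod.cluster.local
#                     e.g. 10-244-1-7.dns-4194.pod.cluster.local (for pod IP 10.244.1.7)
```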
• [SLOW TEST:6.231 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":261,"skipped":4226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:17:27.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-m7f4 STEP: Creating a pod to test atomic-volume-subpath Apr 22 22:17:27.661: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m7f4" in namespace "subpath-3075" to be "success or failure" Apr 22 22:17:27.670: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.078293ms Apr 22 22:17:29.674: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013092465s Apr 22 22:17:31.678: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Running", Reason="", readiness=true. Elapsed: 4.017257517s Apr 22 22:17:33.683: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Running", Reason="", readiness=true. Elapsed: 6.021377636s Apr 22 22:17:35.686: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Running", Reason="", readiness=true. Elapsed: 8.024744709s Apr 22 22:17:37.690: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Running", Reason="", readiness=true. Elapsed: 10.029051147s Apr 22 22:17:39.695: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Running", Reason="", readiness=true. Elapsed: 12.033466312s Apr 22 22:17:41.699: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Running", Reason="", readiness=true. Elapsed: 14.037431701s Apr 22 22:17:43.702: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Running", Reason="", readiness=true. Elapsed: 16.041198107s Apr 22 22:17:45.706: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Running", Reason="", readiness=true. Elapsed: 18.04497479s Apr 22 22:17:47.710: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Running", Reason="", readiness=true. Elapsed: 20.048572145s Apr 22 22:17:49.714: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Running", Reason="", readiness=true. Elapsed: 22.052506934s Apr 22 22:17:51.718: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Running", Reason="", readiness=true. Elapsed: 24.056397811s Apr 22 22:17:53.721: INFO: Pod "pod-subpath-test-configmap-m7f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.059894417s STEP: Saw pod success Apr 22 22:17:53.721: INFO: Pod "pod-subpath-test-configmap-m7f4" satisfied condition "success or failure" Apr 22 22:17:53.723: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-m7f4 container test-container-subpath-configmap-m7f4: STEP: delete the pod Apr 22 22:17:53.743: INFO: Waiting for pod pod-subpath-test-configmap-m7f4 to disappear Apr 22 22:17:53.747: INFO: Pod pod-subpath-test-configmap-m7f4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-m7f4 Apr 22 22:17:53.747: INFO: Deleting pod "pod-subpath-test-configmap-m7f4" in namespace "subpath-3075" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:17:53.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3075" for this suite. • [SLOW TEST:26.481 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":262,"skipped":4254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:17:53.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 22 22:17:53.827: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-78990561-dabe-4e18-be15-b5bf6b0710d3" in namespace "security-context-test-823" to be "success or failure" Apr 22 22:17:53.842: INFO: Pod "busybox-readonly-false-78990561-dabe-4e18-be15-b5bf6b0710d3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.707635ms Apr 22 22:17:55.845: INFO: Pod "busybox-readonly-false-78990561-dabe-4e18-be15-b5bf6b0710d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018234441s Apr 22 22:17:57.850: INFO: Pod "busybox-readonly-false-78990561-dabe-4e18-be15-b5bf6b0710d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022425401s Apr 22 22:17:57.850: INFO: Pod "busybox-readonly-false-78990561-dabe-4e18-be15-b5bf6b0710d3" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:17:57.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-823" for this suite. 
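The Security Context test above runs a container with `readOnlyRootFilesystem: false`, i.e. a writable root filesystem. A sketch of that securityContext (name, image, and command illustrative):

```yaml
# Sketch: readOnlyRootFilesystem=false leaves the container rootfs writable,
# so an in-container write succeeds and the pod reaches Succeeded.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-example
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo ok > /tmp/f"]   # write works on a writable rootfs
    securityContext:
      readOnlyRootFilesystem: false
```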
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4292,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:17:57.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 22 22:18:06.032: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 22:18:06.058: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 22:18:08.058: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 22:18:08.061: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 22:18:10.058: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 22:18:10.062: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 22:18:12.058: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 22:18:12.062: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 22:18:14.058: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 22:18:14.065: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 22:18:16.058: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 22:18:16.062: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 22:18:18.058: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 22:18:18.062: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 22:18:20.058: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 22:18:20.062: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:18:20.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1187" for this suite. 
• [SLOW TEST:22.210 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4300,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:18:20.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:18:25.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7425" for this suite. 
• [SLOW TEST:5.170 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":265,"skipped":4316,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:18:25.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-9612 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9612 STEP: Deleting pre-stop pod Apr 22 22:18:38.620: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:18:38.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9612" for this suite. • [SLOW TEST:13.402 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":266,"skipped":4324,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:18:38.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:18:54.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5503" for this suite. • [SLOW TEST:16.305 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":267,"skipped":4341,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:18:54.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 22 22:18:55.063: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8827 /api/v1/namespaces/watch-8827/configmaps/e2e-watch-test-label-changed 513283e6-98f6-456a-9dca-fb939b2dafdd 10240758 0 2020-04-22 22:18:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 22 22:18:55.063: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8827 /api/v1/namespaces/watch-8827/configmaps/e2e-watch-test-label-changed 513283e6-98f6-456a-9dca-fb939b2dafdd 10240759 0 2020-04-22 22:18:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 22 22:18:55.064: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8827 /api/v1/namespaces/watch-8827/configmaps/e2e-watch-test-label-changed 513283e6-98f6-456a-9dca-fb939b2dafdd 10240760 0 2020-04-22 22:18:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 22 22:19:05.111: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8827 /api/v1/namespaces/watch-8827/configmaps/e2e-watch-test-label-changed 513283e6-98f6-456a-9dca-fb939b2dafdd 10240808 0 2020-04-22 22:18:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 22 22:19:05.111: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8827 /api/v1/namespaces/watch-8827/configmaps/e2e-watch-test-label-changed 513283e6-98f6-456a-9dca-fb939b2dafdd 10240809 0 2020-04-22 22:18:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 22 22:19:05.111: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8827 /api/v1/namespaces/watch-8827/configmaps/e2e-watch-test-label-changed 513283e6-98f6-456a-9dca-fb939b2dafdd 10240810 0 2020-04-22 22:18:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:19:05.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8827" for this suite. • [SLOW TEST:10.176 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":268,"skipped":4358,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:19:05.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the 
termination message should be set Apr 22 22:19:09.235: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:19:09.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3319" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4375,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:19:09.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 22:19:09.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66ceb2df-f7e3-4818-8d9d-5ed7c48ccf99" in namespace "projected-1422" to be "success or failure" Apr 22 22:19:09.367: INFO: Pod "downwardapi-volume-66ceb2df-f7e3-4818-8d9d-5ed7c48ccf99": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.163083ms Apr 22 22:19:11.371: INFO: Pod "downwardapi-volume-66ceb2df-f7e3-4818-8d9d-5ed7c48ccf99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007432197s Apr 22 22:19:13.375: INFO: Pod "downwardapi-volume-66ceb2df-f7e3-4818-8d9d-5ed7c48ccf99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011373101s STEP: Saw pod success Apr 22 22:19:13.375: INFO: Pod "downwardapi-volume-66ceb2df-f7e3-4818-8d9d-5ed7c48ccf99" satisfied condition "success or failure" Apr 22 22:19:13.379: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-66ceb2df-f7e3-4818-8d9d-5ed7c48ccf99 container client-container: STEP: delete the pod Apr 22 22:19:13.398: INFO: Waiting for pod downwardapi-volume-66ceb2df-f7e3-4818-8d9d-5ed7c48ccf99 to disappear Apr 22 22:19:13.442: INFO: Pod downwardapi-volume-66ceb2df-f7e3-4818-8d9d-5ed7c48ccf99 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:19:13.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1422" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4380,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:19:13.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 22 22:19:13.551: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 5.202394ms)
Apr 22 22:19:13.565: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 13.605849ms)
Apr 22 22:19:13.570: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 5.25265ms)
Apr 22 22:19:13.574: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.520314ms)
Apr 22 22:19:13.577: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.013377ms)
Apr 22 22:19:13.580: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.28028ms)
Apr 22 22:19:13.584: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.560962ms)
Apr 22 22:19:13.588: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.519878ms)
Apr 22 22:19:13.591: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.335567ms)
Apr 22 22:19:13.595: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.844119ms)
Apr 22 22:19:13.598: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.634711ms)
Apr 22 22:19:13.602: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.6298ms)
Apr 22 22:19:13.606: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.831153ms)
Apr 22 22:19:13.610: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.917708ms)
Apr 22 22:19:13.618: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 8.33253ms)
Apr 22 22:19:13.647: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 28.36561ms)
Apr 22 22:19:13.651: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.926444ms)
Apr 22 22:19:13.655: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.874404ms)
Apr 22 22:19:13.658: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.403132ms)
Apr 22 22:19:13.662: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.661236ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:19:13.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5437" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":271,"skipped":4443,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:19:13.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-6fc6ac66-9113-4cef-bfa0-be824a986f41 STEP: Creating a pod to test consume configMaps Apr 22 22:19:13.733: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d800400c-666d-4e0a-bad1-4d33af908866" in namespace "projected-6289" to be "success or failure" Apr 22 22:19:13.778: INFO: Pod "pod-projected-configmaps-d800400c-666d-4e0a-bad1-4d33af908866": Phase="Pending", Reason="", readiness=false. Elapsed: 44.512869ms Apr 22 22:19:15.782: INFO: Pod "pod-projected-configmaps-d800400c-666d-4e0a-bad1-4d33af908866": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.048658539s Apr 22 22:19:17.786: INFO: Pod "pod-projected-configmaps-d800400c-666d-4e0a-bad1-4d33af908866": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052319539s STEP: Saw pod success Apr 22 22:19:17.786: INFO: Pod "pod-projected-configmaps-d800400c-666d-4e0a-bad1-4d33af908866" satisfied condition "success or failure" Apr 22 22:19:17.789: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-d800400c-666d-4e0a-bad1-4d33af908866 container projected-configmap-volume-test: STEP: delete the pod Apr 22 22:19:17.823: INFO: Waiting for pod pod-projected-configmaps-d800400c-666d-4e0a-bad1-4d33af908866 to disappear Apr 22 22:19:17.827: INFO: Pod pod-projected-configmaps-d800400c-666d-4e0a-bad1-4d33af908866 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:19:17.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6289" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4450,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:19:17.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-555946a5-f331-468a-9d35-f980fc190da3 STEP: Creating a pod to test consume secrets Apr 22 22:19:17.908: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6eb8d409-c072-41b4-94aa-8e5f33809caa" in namespace "projected-6336" to be "success or failure" Apr 22 22:19:17.917: INFO: Pod "pod-projected-secrets-6eb8d409-c072-41b4-94aa-8e5f33809caa": Phase="Pending", Reason="", readiness=false. Elapsed: 9.109036ms Apr 22 22:19:19.921: INFO: Pod "pod-projected-secrets-6eb8d409-c072-41b4-94aa-8e5f33809caa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012991258s Apr 22 22:19:21.926: INFO: Pod "pod-projected-secrets-6eb8d409-c072-41b4-94aa-8e5f33809caa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017411645s STEP: Saw pod success Apr 22 22:19:21.926: INFO: Pod "pod-projected-secrets-6eb8d409-c072-41b4-94aa-8e5f33809caa" satisfied condition "success or failure" Apr 22 22:19:21.929: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-6eb8d409-c072-41b4-94aa-8e5f33809caa container projected-secret-volume-test: STEP: delete the pod Apr 22 22:19:21.982: INFO: Waiting for pod pod-projected-secrets-6eb8d409-c072-41b4-94aa-8e5f33809caa to disappear Apr 22 22:19:22.002: INFO: Pod pod-projected-secrets-6eb8d409-c072-41b4-94aa-8e5f33809caa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:19:22.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6336" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4455,"failed":0} SSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:19:22.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4829, will wait for the garbage collector to delete the pods Apr 22 22:19:26.150: INFO: Deleting Job.batch foo took: 6.200529ms Apr 22 
22:19:26.451: INFO: Terminating Job.batch foo pods took: 300.256306ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 22 22:19:59.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4829" for this suite. • [SLOW TEST:37.982 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":274,"skipped":4460,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 22 22:19:59.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 22 22:20:00.138: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f30da46e-f547-40ad-8e59-4d87c0ddd826" in namespace "projected-7568" to be "success or failure" Apr 22 22:20:00.142: INFO: Pod 
"downwardapi-volume-f30da46e-f547-40ad-8e59-4d87c0ddd826": Phase="Pending", Reason="", readiness=false. Elapsed: 3.505897ms
Apr 22 22:20:02.146: INFO: Pod "downwardapi-volume-f30da46e-f547-40ad-8e59-4d87c0ddd826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007440334s
Apr 22 22:20:04.150: INFO: Pod "downwardapi-volume-f30da46e-f547-40ad-8e59-4d87c0ddd826": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011836862s
STEP: Saw pod success
Apr 22 22:20:04.150: INFO: Pod "downwardapi-volume-f30da46e-f547-40ad-8e59-4d87c0ddd826" satisfied condition "success or failure"
Apr 22 22:20:04.153: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f30da46e-f547-40ad-8e59-4d87c0ddd826 container client-container: 
STEP: delete the pod
Apr 22 22:20:04.210: INFO: Waiting for pod downwardapi-volume-f30da46e-f547-40ad-8e59-4d87c0ddd826 to disappear
Apr 22 22:20:04.225: INFO: Pod downwardapi-volume-f30da46e-f547-40ad-8e59-4d87c0ddd826 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:20:04.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7568" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4466,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:20:04.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 22 22:20:04.329: INFO: Waiting up to 5m0s for pod "pod-9318a60d-c098-4f72-9d39-2ea9d0f9beb7" in namespace "emptydir-2152" to be "success or failure"
Apr 22 22:20:04.332: INFO: Pod "pod-9318a60d-c098-4f72-9d39-2ea9d0f9beb7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.487652ms
Apr 22 22:20:06.345: INFO: Pod "pod-9318a60d-c098-4f72-9d39-2ea9d0f9beb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016219838s
Apr 22 22:20:08.557: INFO: Pod "pod-9318a60d-c098-4f72-9d39-2ea9d0f9beb7": Phase="Running", Reason="", readiness=true. Elapsed: 4.228473462s
Apr 22 22:20:10.561: INFO: Pod "pod-9318a60d-c098-4f72-9d39-2ea9d0f9beb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.232027489s
STEP: Saw pod success
Apr 22 22:20:10.561: INFO: Pod "pod-9318a60d-c098-4f72-9d39-2ea9d0f9beb7" satisfied condition "success or failure"
Apr 22 22:20:10.564: INFO: Trying to get logs from node jerma-worker2 pod pod-9318a60d-c098-4f72-9d39-2ea9d0f9beb7 container test-container: 
STEP: delete the pod
Apr 22 22:20:10.594: INFO: Waiting for pod pod-9318a60d-c098-4f72-9d39-2ea9d0f9beb7 to disappear
Apr 22 22:20:10.602: INFO: Pod pod-9318a60d-c098-4f72-9d39-2ea9d0f9beb7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:20:10.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2152" for this suite.
• [SLOW TEST:6.378 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4470,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:20:10.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:20:27.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6457" for this suite.
• [SLOW TEST:16.424 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":277,"skipped":4506,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 22 22:20:27.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 22 22:20:27.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 22 22:20:29.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2247 create -f -'
Apr 22 22:20:32.479: INFO: stderr: ""
Apr 22 22:20:32.479: INFO: stdout: "e2e-test-crd-publish-openapi-3473-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 22 22:20:32.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2247 delete e2e-test-crd-publish-openapi-3473-crds test-cr'
Apr 22 22:20:32.570: INFO: stderr: ""
Apr 22 22:20:32.570: INFO: stdout: "e2e-test-crd-publish-openapi-3473-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Apr 22 22:20:32.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2247 apply -f -'
Apr 22 22:20:32.828: INFO: stderr: ""
Apr 22 22:20:32.828: INFO: stdout: "e2e-test-crd-publish-openapi-3473-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 22 22:20:32.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2247 delete e2e-test-crd-publish-openapi-3473-crds test-cr'
Apr 22 22:20:32.937: INFO: stderr: ""
Apr 22 22:20:32.937: INFO: stdout: "e2e-test-crd-publish-openapi-3473-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 22 22:20:32.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3473-crds'
Apr 22 22:20:33.214: INFO: stderr: ""
Apr 22 22:20:33.214: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3473-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 22 22:20:36.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2247" for this suite.
• [SLOW TEST:9.109 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":278,"skipped":4517,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Apr 22 22:20:36.144: INFO: Running AfterSuite actions on all nodes
Apr 22 22:20:36.144: INFO: Running AfterSuite actions on node 1
Apr 22 22:20:36.144: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}
Ran 278 of 4842 Specs in 4388.012 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS